
elm-concurrent-task with Andrew MacMurray

Andrew MacMurray joins us to discuss `elm-concurrent-task` which allows you to run JavaScript functions with a Task style API.
November 6, 2023
#94

Transcript

[00:00:00]
Hello Jeroen.
[00:00:02]
Hello Dillon.
[00:00:03]
So right before recording you just admitted that you didn't have a pun.
[00:00:07]
That's true.
[00:00:08]
I think that's going to disappoint a lot of listeners.
[00:00:11]
So what we can do is we can both concurrently try to come up with a good pun.
[00:00:17]
That's going to be our task.
[00:00:19]
That's the task you've set for us.
[00:00:20]
I see what you did there.
[00:00:21]
I see what you did there.
[00:00:23]
Did I succeed?
[00:00:24]
The task may have failed, but we might have a little bit of failure recovery,
[00:00:30]
some error handling on this.
[00:00:32]
Let's try to end then and move on from this sequentially.
[00:00:37]
I highly approve of these puns.
[00:00:41]
Andrew, please save us from these puns.
[00:00:44]
We've got Andrew MacMurray joining us.
[00:00:46]
Thanks so much for coming on the show, Andrew.
[00:00:47]
Hello.
[00:00:48]
Thanks for having me.
[00:00:49]
Pleasure to have you.
[00:00:50]
And today we are talking about your
[00:00:54]
Elm Concurrent Task Package.
[00:00:56]
So, well, to start us off, what is Elm Concurrent Task in a nutshell?
[00:01:02]
Like, what even is it and what does it do?
[00:01:07]
Well, it is, I guess you could call it, it's a task-like API.
[00:01:12]
So if you've used Elm core's Task, it's designed to look very similar.
[00:01:17]
The main difference, I'd say, is that with Elm core's Task,
[00:01:23]
its map2, map3
[00:01:24]
and friends, it runs each of those one after the other.
[00:01:28]
Whereas, as the name implies, in Elm Concurrent Task, it runs them at the same time.
[00:01:32]
So you can run like a tree of these concurrently.
[00:01:36]
The other thing it does is it also provides a convenient JavaScript FFI.
[00:01:42]
So you can call, the idea is that you could call a sequence of JavaScript
[00:01:46]
functions and chain the results together.
[00:01:49]
This has also been called task ports in the past.
[00:01:52]
And if you've.
[00:01:53]
If you've used Elm Pages v3 and you're familiar with backend task, it is basically a standalone
[00:02:01]
or more or less a standalone implementation of that.
[00:02:04]
So, yeah.
[00:02:06]
Right.
[00:02:07]
And one that can be run on the front end too, which backend tasks as the name implies, cannot be.
[00:02:15]
True.
[00:02:16]
Yes.
[00:02:17]
But I will add, it was fully inspired by backend task.
[00:02:21]
You gave me the idea, Dillon, and I just took it and ran with it.
[00:02:23]
So.
[00:02:24]
Fantastic.
[00:02:25]
I love that.
[00:02:26]
Yeah.
[00:02:26]
And I, on the point of running concurrently rather than sequentially, I just want to maybe give people an example to, to motivate that a little bit because, you know, it sounds, it sounds pretty in the weeds, but if you think about it, if you say, if you have two tasks defined for making HTTP requests, you know, to get the user's profile information and get the current weather.
[00:02:53]
Yeah.
[00:02:53]
Yeah.
[00:02:53]
Right.
[00:02:55]
There's some latency for that.
[00:02:57]
And, you know, maybe it takes some time to process the request on, on these different respective backends.
[00:03:04]
If you do Task.map2, it's going to run the first one.
[00:03:07]
And then once that's done, it's going to go and run the second one, which is not great.
[00:03:13]
So it's a serious problem.
[00:03:16]
Why isn't that great in this case?
[00:03:19]
Well, because you, you stack the latency of the one on top of the other.
[00:03:23]
You, you turn something that doesn't need to be a waterfall into one, i.e.
[00:03:29]
stacked one in front of the other finishing before the next one starts.
[00:03:33]
You, you could just fire them off both at the same time.
[00:03:36]
And that's sort of how the basic structure of web browsers is designed, and of JavaScript, of the JavaScript runtime.
[00:03:45]
It's designed to be really good at doing things asynchronously.
[00:03:50]
So it's at its best when you can sort of.
[00:03:53]
Fire things off and let them asynchronously come back and continue on.
[00:04:00]
It's not good when you hold up the event loop or, or waterfall things too much.
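To make that waterfall concrete, here is a minimal Elm sketch; Profile, Weather and the two tasks are hypothetical stand-ins for real HTTP tasks.

    module Waterfall exposing (loadPage)

    import Task exposing (Task)

    type alias Profile =
        String

    type alias Weather =
        String

    -- Hypothetical stand-ins for two real HTTP tasks (e.g. built with Http.task).
    getProfile : Task String Profile
    getProfile =
        Task.succeed "profile"

    getWeather : Task String Weather
    getWeather =
        Task.succeed "sunny"

    -- elm/core semantics: getWeather only starts after getProfile has resolved,
    -- so the two latencies add up into a waterfall.
    loadPage : Task String ( Profile, Weather )
    loadPage =
        Task.map2 Tuple.pair getProfile getWeather

    -- With elm-concurrent-task, the equivalent map2 starts both tasks at once
    -- and combines the results when both have come back.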
[00:04:09]
Yeah.
[00:04:09]
I actually recently learned that all the map functions are sequential.
[00:04:14]
Like I thought they were concurrent, but no.
[00:04:18]
It's surprising, right?
[00:04:19]
Yeah.
[00:04:19]
Yeah.
[00:04:20]
So in the example where
[00:04:23]
you make two requests, one to get the time or the weather and another to get some other information.
[00:04:29]
Doesn't matter too much.
[00:04:30]
It's just slower, which has some impact obviously.
[00:04:34]
But if you, like, make a GET request and then a POST request or a PUT request to actively do something, and you expect those two to, to be done even if the first one fails, then that changes the behavior.
[00:04:52]
Right?
[00:04:53]
True.
[00:04:53]
Although.
[00:04:53]
You could always, um, Hmm.
[00:04:57]
You know, I'm not even sure what Task.map2 in the Elm core task definitions would do, but yeah, maybe it would not fire the second one.
[00:05:07]
You could always do.
[00:05:07]
It doesn't, it, it actually does, like, a Task.andThen on the first one, right.
[00:05:14]
And then it runs the second one.
[00:05:16]
So nothing gets sent out or triggered before you, before the first one completes.
[00:05:22]
Um, we.
[00:05:23]
We just had a new simplification in elm-review-simplify, that if you have, like, a Task.map3 and the second element, uh, is clearly something that will fail.
[00:05:35]
Then the map3 gets changed to a map2 and we remove the third or the fourth argument.
[00:05:42]
Like if it's a Task.fail or whatever.
[00:05:45]
Yeah.
[00:05:46]
Which is not going to happen very often, but now you will have, like, something that tells you, like, hey, this is
[00:05:53]
simplifiable,
[00:05:53]
or here's maybe something you didn't expect.
[00:05:57]
That is interesting.
[00:05:58]
Yeah.
[00:05:58]
Cause generally the Elm standard libraries have very intuitive semantics, but this is definitely one that's surprising.
[00:06:06]
And it is interesting because it is technically possible to define, and there are packages out there that define a way to use the Elm task package to do parallel tasks.
[00:06:19]
Um, so that is technically possible, but yeah.
[00:06:23]
So the FFI part of Elm concurrent task, I think is also really crucial.
[00:06:30]
So, and, and now the idea here you talk about in the readme is, is a hack-free way of doing FFI.
[00:06:37]
I think sometimes in the Elm community, FFI is sort of looked at as a dirty word, but I mean, we need to interact with things outside of the sandbox of Elm in some way.
[00:06:48]
So it's a question of how do we do that?
[00:06:52]
So, so what does that
[00:06:53]
look like in Elm Concurrent Task?
[00:06:56]
What does that look like?
[00:06:57]
So very similar to backend task, funnily enough, the idea is you would define a function name in Elm.
[00:07:06]
So like a string, like the, the string function name, you give it, uh, a decoder effectively, like what you expect to come back from the function, some way of interpreting the errors too, and then an encoded set of arguments.
[00:07:22]
And.
[00:07:23]
On the other side, the JavaScript side, there's a, there's a JavaScript package that goes alongside the Elm package that just looks up the stringified name in effectively like an object of functions that you provide on the JavaScript side.
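As a rough sketch of what that looks like on the Elm side (the record fields and helper names here follow my reading of the package docs and may differ slightly; "fs:readFile" is just an illustrative name):

    module Tasks exposing (Error(..), readFile)

    import ConcurrentTask exposing (ConcurrentTask)
    import Json.Encode as Encode

    type Error
        = ReadError String

    -- The "function" string must match a key in the object of functions
    -- registered on the JavaScript side.
    readFile : String -> ConcurrentTask Error String
    readFile path =
        ConcurrentTask.define
            { function = "fs:readFile"
            , expect = ConcurrentTask.expectString
            , errors = ConcurrentTask.expectThrows ReadError
            , args = Encode.object [ ( "path", Encode.string path ) ]
            }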
[00:07:39]
So when I saw it in backend task, I was like, that's such a simple, elegant idea.
[00:07:45]
And it actually has quite a nice, very nice kind of back and forth between the two packages.
[00:07:50]
And I think the main, the main safety
[00:07:53]
that you provide as library authors is telling Elm when things like the function hasn't been defined, or, like, if it throws an exception or doesn't give you back what you expect it to give back.
[00:08:06]
You're sort of baking in all of those bits, where if you did it in an application without some of the wiring, you could get, you know, unpleasant errors, things that go wrong,
[00:08:15]
that usually happen when you're interacting with ports.
[00:08:18]
Right.
[00:08:19]
Right.
[00:08:19]
And if you contrast that with defining a
[00:08:23]
port in, in Elm and running that as a command or a subscription, if you, let's say, define an outgoing port to set an item in local storage, I believe if you, if you throw an exception in a port in Elm, it's just throwing that exception in, in JavaScript.
[00:08:43]
Right.
[00:08:44]
So it's not, there are no guardrails that are going to prevent that unless you add them in yourself.
[00:08:49]
Exactly.
[00:08:50]
So what would be the impact here?
[00:08:51]
Would it cancel all the other
[00:08:53]
tasks that are remaining?
[00:08:55]
So there's, there's actually two kinds of errors that you can get back.
[00:09:01]
You'd, you'd receive an error back via concurrent task, and you can define certain tasks where you expect an exception to be thrown.
[00:09:12]
You can say like, catch the exception and then lift it into an error type.
[00:09:17]
Or you can, there's, there's actually kind of a lower level, which are called unexpected errors where like.
[00:09:23]
If you're, if you're not expecting this, this JavaScript function to throw an exception and then something happens, it's kind of like it will cancel everything.
[00:09:31]
And then you get a descriptive error message on the other side.
[00:09:36]
So yeah, I've been, I've, I've tried it out in a couple of, a couple of applications.
[00:09:43]
That pattern seems to have worked quite nicely.
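A hedged sketch of those two styles; expectThrows and expectNoErrors are the helper names as I recall them from the docs, so treat them as assumptions:

    module ErrorKinds exposing (Error(..), currentTime, readClipboard)

    import ConcurrentTask exposing (ConcurrentTask)
    import Json.Decode as Decode
    import Json.Encode as Encode

    type Error
        = ClipboardError String

    -- Expected failure: a thrown exception is caught and lifted into the Error type.
    readClipboard : ConcurrentTask Error String
    readClipboard =
        ConcurrentTask.define
            { function = "clipboard:read"
            , expect = ConcurrentTask.expectString
            , errors = ConcurrentTask.expectThrows ClipboardError
            , args = Encode.null
            }

    -- Declared as never failing: if the JavaScript function still throws, it
    -- surfaces as an unexpected error, the run is cancelled, and a descriptive
    -- message comes back through the completion callback instead.
    currentTime : ConcurrentTask x Int
    currentTime =
        ConcurrentTask.define
            { function = "time:now"
            , expect = ConcurrentTask.expectJson Decode.int
            , errors = ConcurrentTask.expectNoErrors
            , args = Encode.null
            }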
[00:09:45]
So I'm a little bit confused about something.
[00:09:48]
What happens if, in one of the ports through concurrent task or just plain
[00:09:53]
regular ports on the JavaScript side, what happens if there's an exception that is being thrown synchronously?
[00:10:00]
Does that then cancel all the other tasks that are related to it, or in the same batch of commands being sent?
[00:10:12]
Are those canceled?
[00:10:13]
Do you know that by any chance?
[00:10:15]
So with the, if you've got a batch, like a batch that's gone out and one of them throws.
[00:10:23]
Yeah.
[00:10:23]
It's not clever enough to like stop that.
[00:10:26]
The, the ones that are in flight at that moment, they've already gone through the port, but any, any, any followup ones from that will be canceled.
[00:10:37]
So those that are, like, andThen'd or mapped on that one will be canceled, but the other batched commands will just go through.
[00:10:44]
So yeah.
[00:10:45]
Which I think makes sense.
[00:10:47]
Like that.
[00:10:48]
There's no reason... Maybe they're disconnected.
[00:10:52]
Like, I'm doing
[00:10:53]
one thing and another totally unrelated thing.
[00:10:56]
And if the first one fails, then I don't want that to be affected.
[00:11:01]
Yeah.
[00:11:01]
It depends on the case, but I guess having the possibility to say, well, these are connected and these are not connected.
[00:11:08]
Let's have them be separate just in case.
[00:11:11]
Who knows?
[00:11:12]
Okay.
[00:11:12]
So that's, that's interesting.
[00:11:14]
If, if it's something that folks are actually interested in, it's definitely doable, using, like, timeouts in HTTP. Because one of the things I had to do to make it compatible
[00:11:23]
with, well, to fill in the functionality of the existing Elm core tasks, because that's, that's one thing, they're not actually compatible with Elm core's tasks, because they're
[00:11:34]
two different types. But re-implementing HTTP timeouts, there's something called an abort signal in the fetch API.
[00:11:42]
So that may be, that may be a way of, like, if people want cancelable... Say, like, you've got a batch of, like, a hundred tasks that have gone out.
[00:11:51]
One of them fails.
[00:11:52]
And you actually don't want any of the other ones to carry on.
[00:11:55]
You could, then maybe there's a way of sending them an abort signal to say, just stop.
[00:12:00]
Like, so you kind of reclaim memory, or maybe they're expensive HTTP requests that you want to stop.
[00:12:07]
But I, I totally agree with you saying, like, it's probably a case by case basis, like, you don't necessarily want that on every single task you define.
[00:12:16]
Yeah.
[00:12:17]
So you said something interesting there.
[00:12:18]
You had to re-implement HTTP task.
[00:12:21]
What is that?
[00:12:22]
What is that all about?
[00:12:24]
So they, the concurrent task, like task type is not, uh, unfortunately it's like, it's not the same as the Elm core task type.
[00:12:35]
So if you wanted to use, like, Time.now or Process.sleep from Elm core's library, they wouldn't, they wouldn't fit in.
[00:12:44]
So it's actually, I mean, I took the idea from Dillon as well, who did the same with backend task, in that you, you write...
[00:12:52]
Yeah.
[00:13:08]
So the concurrent task HTTP actually calls the fetch API under the hood,
[00:13:14]
which is actually, which is actually nice because, you know, the built in Elm HTTP package uses,
[00:13:22]
uses XHR requests, which is kind of an outdated standard.
[00:13:28]
And there are some performance improvements and modernizations.
[00:13:32]
They're subtle, but there are some minor performance improvements
[00:13:36]
and general improvements in fetch.
[00:13:38]
So that's kind of a nice feature.
[00:13:40]
Okay, so that means that you had to implement some of the things in Elm
[00:13:45]
and some things in JavaScript.
[00:13:47]
And does that mean that your Elm package does not work without the JavaScript package?
[00:13:53]
Like, for instance, I thought that the JavaScript package was only for if you wanted to enable task ports,
[00:14:00]
but it just doesn't work without setting up that JavaScript library.
[00:14:05]
That's right, yeah, you do need the JavaScript library too,
[00:14:08]
because the JavaScript library will wire in the ports that you provide from Elm.
[00:14:14]
It's actually the thing that runs.
[00:14:17]
It runs the tasks themselves.
[00:14:20]
Right, so even if you wanted to run two HTTP requests
[00:14:26]
or two Time.now functions that call out to the core one,
[00:14:32]
you wouldn't be able to run them concurrently
[00:14:34]
because you need a new primitive to run them concurrently.
[00:14:38]
Gotcha.
[00:14:38]
Yeah, so it's basically using the exact same mechanism
[00:14:44]
for these internal primitives.
[00:14:47]
Primitives of, you know, concurrent task.now and HTTP
[00:14:52]
as a user would use to define their own custom task definitions in JavaScript.
[00:15:00]
Basically, the only difference is that when you call the code in JavaScript
[00:15:05]
to set up your additional definitions for concurrent tasks,
[00:15:11]
it already comes with some sort of pre-installed sort of core concurrent task definition.
[00:15:17]
That's the only difference, I think.
[00:15:19]
So how do you actually make those tasks concurrent in JavaScript?
[00:15:25]
Do you wrap everything in a setImmediate callback
[00:15:29]
or I don't even remember the name of the function?
[00:15:32]
So it's basically you're sending out,
[00:15:35]
I call them internally, I call them task definitions.
[00:15:38]
So it's like that record of the function name,
[00:15:41]
the arguments that you're sending.
[00:15:44]
You send those all out in a command.batch.
[00:15:46]
Okay.
[00:15:47]
So it's like everything goes out the port whenever it's ready to be executed.
[00:15:51]
And then the JavaScript runner will just run them.
[00:15:55]
And then once it's got the results, it calls an incoming port with an ID.
[00:16:03]
The main challenge of that package and the core bit of it
[00:16:08]
is actually just attaching IDs to all of the tasks
[00:16:11]
and then sending them back through and being able to reassociate them.
[00:16:15]
So because Elm, as it stands, can run lots of things concurrently,
[00:16:22]
like command.batch will let you run as many things as you can imagine
[00:16:27]
or that the program will take.
[00:16:30]
But the problem is actually being able to associate them with the previous call.
[00:16:35]
So that's all the wiring that this package is doing
[00:16:38]
to make it appear like they're just happening one after the other.
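Conceptually, the JavaScript runner is doing something like the following with each definition that arrives through the port (an illustrative sketch, not the package's actual code):

    // Look up each task definition by its function name, run it, and send the result
    // back through the incoming port tagged with the same id, so the Elm side can
    // re-associate it with the right spot in the chain.
    function runDefinitions(definitions, registeredTasks, receivePort) {
      definitions.forEach(({ id, function: name, args }) => {
        const task = registeredTasks[name];
        Promise.resolve()
          .then(() => task(args))
          .then((result) => receivePort.send({ id, result }))
          .catch((error) => receivePort.send({ id, error: String(error) }));
      });
    }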
[00:16:42]
Right.
[00:16:42]
And as far as set timeout and things like that,
[00:16:45]
the way that JavaScript works,
[00:16:48]
things are asynchronous in the event loop by default.
[00:16:53]
So when you do fetch,
[00:16:56]
you don't have to do anything for that to be an asynchronous task.
[00:17:01]
You just fire it off and it's going to be running
[00:17:05]
and then continue the event loop.
[00:17:07]
But the setTimeout thing is more of a hack,
[00:17:10]
or the setImmediate or whatever.
[00:17:12]
Those hacks are more for getting the event loop,
[00:17:15]
to tick, to try to hack the processing of things in the event loop.
[00:17:21]
Yeah.
[00:17:22]
So the example that I had in mind where things are not async by default
[00:17:27]
is the Time.now command
[00:17:30]
because that just calls Date.now in JavaScript
[00:17:33]
and sends it back to the port or to the message in Elm.
[00:17:39]
So that one is not asynchronous by default.
[00:17:42]
So do you wrap that one back into a setTimeout?
[00:17:45]
Or is it fine if it's just synchronous?
[00:17:49]
It's not currently wrapped in a setTimeout.
[00:17:52]
I think there's an inevitable very small amount of lag probably
[00:17:59]
in that it has to be sent.
[00:18:01]
It probably does this just by default of it having to go out of the port
[00:18:05]
and then back in.
[00:18:07]
There'll be a very small...
[00:18:09]
I don't know actually how I'd measure the lag that it's got on it.
[00:18:15]
I mean, it's also like, we're not...
[00:18:19]
I mean, it's not like we're interested in the moment of when the update function returns, right?
[00:18:25]
It's when you get a message back.
[00:18:27]
Like you will have some time between the two,
[00:18:30]
between the return of the update and the start of the update function again.
[00:18:34]
And thankfully, we're not that...
[00:18:36]
We don't need to be that precise in browsers usually, hopefully.
[00:18:40]
Yeah.
[00:18:41]
Not often at least.
[00:18:42]
True.
[00:18:44]
Yeah.
[00:18:44]
I think there are like...
[00:18:46]
There's asynchronous for like things performing in parallel.
[00:18:50]
And then there are like the semantics of the sort of chain of concurrent tasks.
[00:18:55]
And if, you know, assuming that it's similar under the hood to Elm Pages,
[00:19:00]
it's basically like able to just resolve to check,
[00:19:04]
is this task complete and continue the chain of remaining tasks
[00:19:10]
based on which ones needed to be completed before it continues.
[00:19:15]
So it's not...
[00:19:18]
That part of it is not using this sort of JavaScript event loop.
[00:19:23]
It's just checking whenever it can, if it's ready for the next task.
[00:19:28]
So for the...
[00:19:30]
Another comparison between commands and tasks,
[00:19:35]
I always find that that's one of these kind of almost rough edges in Elm
[00:19:41]
is like there are these two similar but different ways of doing things.
[00:19:45]
Yeah.
[00:19:45]
There are different ways of executing things.
[00:19:48]
And the semantics are a little bit different.
[00:19:51]
Like a, you know, task has an error type and they're chainable
[00:19:57]
and they're sequential, but they can technically be done in parallel.
[00:20:02]
And like...
[00:20:03]
And you might get a response, which might not...
[00:20:04]
Which is not the case for...
[00:20:06]
Well, you will always get a response, which is not the case for commands.
[00:20:10]
Right.
[00:20:11]
That's an interesting one.
[00:20:12]
I think that's, like, the main reason why
[00:20:15]
commands and tasks are different. And then the other strange thing is, like, there are
[00:20:21]
only a handful of things in Elm that you can use to create, like, a native task, like HTTP, and some of
[00:20:30]
the, you know, DOM operations give you tasks. But then for a lot of things there just is no task version,
[00:20:38]
you just do a command. For example, ports: like, you define a port and it gives you, you know, a command
[00:20:44]
for an outgoing port. But I think a lot of people say, well, but I want to be able to
[00:20:51]
perform an HTTP request and then take some decoded data from that HTTP request and
[00:20:59]
write that to local storage, and then I want to go and perform an HTTP POST request, and then I want
[00:21:06]
to do this. So, like, but you can't chain an HTTP request with a port because
[00:21:14]
It's a command.
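That kind of chain is exactly what a task-style FFI gives you. A minimal sketch, with the three steps stubbed out as hypothetical ConcurrentTask values (in a real app they would be built with the package's HTTP helpers and ConcurrentTask.define):

    module Chain exposing (syncUser)

    import ConcurrentTask exposing (ConcurrentTask)

    type alias User =
        { name : String }

    -- Hypothetical stand-ins for an HTTP GET, a local storage write, and an HTTP POST.
    getUser : ConcurrentTask String User
    getUser =
        ConcurrentTask.succeed { name = "jane" }

    saveUser : User -> ConcurrentTask String ()
    saveUser _ =
        ConcurrentTask.succeed ()

    postReport : User -> ConcurrentTask String ()
    postReport _ =
        ConcurrentTask.succeed ()

    -- The whole sequence is a single task value: HTTP, then a JavaScript call,
    -- then HTTP again, which is not something a port-based command can express.
    syncUser : ConcurrentTask String ()
    syncUser =
        getUser
            |> ConcurrentTask.andThen
                (\user -> saveUser user |> ConcurrentTask.map (\_ -> user))
            |> ConcurrentTask.andThen postReport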
[00:21:15]
So I've always felt like sometimes when people are complaining about Elm being very limited
[00:21:23]
in the way you can do FFI, that there's no FFI in Elm.
[00:21:27]
You can only do ports or web components, right?
[00:21:32]
Those are like sort of the two standard ways to do communication with JavaScript and Elm.
[00:21:38]
Well, I feel like one of the biggest pain points of that, at least for me personally,
[00:21:44]
is just that it's very awkward to do that with a command.
[00:21:48]
I want a sort of chainable task style way of doing that.
[00:21:52]
I want to be able to include that in a chain with HTTP requests and other types of tasks.
[00:21:58]
So I think it's very cool that you've sort of built this abstraction that with minimal
[00:22:04]
hacks gives you a way to kind of do these different things all in that same chainable
[00:22:11]
paradigm.
[00:22:12]
With minimal JavaScript
[00:22:14]
setup.
[00:22:15]
Right.
[00:22:15]
Would you say it's a hack, Andrew?
[00:22:18]
The only reason it's not a hack is comparing it to the other ways that it has been done
[00:22:23]
in the past to make this kind of thing work.
[00:22:26]
Those are definitely hacks.
[00:22:28]
Why don't you explain a couple of those approaches that other tools have used?
[00:22:35]
Sure.
[00:22:36]
There was one that I used in the past on an application that did inspire some of the work
[00:22:42]
on this package.
[00:22:44]
There's a package called Elm Taskport.
[00:22:47]
And it's a very clever hack.
[00:22:50]
And the idea is that you monkey-patch XMLHttpRequest.
[00:22:57]
So you, like, you modify that global object so that whenever you're sending off an HTTP request
[00:23:04]
with, say, you've got like a special URL scheme.
[00:23:08]
It might be, like, an Elm custom function.
[00:23:11]
Your monkey-patched version will
[00:23:14]
check for that,
[00:23:15]
that URL.
[00:23:16]
And then you can call custom JavaScript from inside there.
[00:23:20]
And it works really well.
[00:23:23]
It's clever.
[00:23:23]
But you're modifying, like, global objects, which is a little bit dodgy.
[00:23:27]
So I think that's, like, that's without a doubt a hack.
[00:23:31]
Yeah.
[00:23:32]
Don't let your mother know about this.
[00:23:34]
Exactly.
[00:23:35]
But what your mother don't know won't hurt her.
[00:23:38]
Well.
[00:23:38]
I mean, if you whack your mother in the back
[00:23:44]
of her head,
[00:23:44]
she won't know.
[00:23:46]
Yes, it will hurt her.
[00:23:49]
What is that idiom?
[00:23:50]
Like, that makes no sense.
[00:23:53]
Don't modify global objects.
[00:23:55]
If you can help it.
[00:23:57]
Just don't hurt your mother, Dillon.
[00:24:04]
But that is the interesting thing about this approach is that.
[00:24:08]
I mean, your readme,
[00:24:11]
Andrew, talks about this as FFI.
[00:24:14]
And conceptually it is.
[00:24:16]
But it's actually just doing this, like, accepted thing in Elm, which is like passing in some flags.
[00:24:23]
And, you know, it's not using any frowned upon techniques to do these things.
[00:24:29]
It's just an abstraction that makes it feel a little bit more natural to interop with JavaScript.
[00:24:35]
Exactly.
[00:24:36]
I think the main thing that really I found very interesting with that approach was you can use it.
[00:24:44]
You can use it in Node or in the browser.
[00:24:47]
Whereas if you've got, if you're relying on the hacks like XMLHttpRequest, there's another hack with service workers that you can do.
[00:24:54]
Like they work great on the browser.
[00:24:56]
But if you wanted to run it in, like, say, you wanted to run it in Deno or Node, like, you have to add polyfills in and you're getting into a real fun situation at that point.
[00:25:07]
And now let's imagine that Elm got a new release of elm/http, which now uses fetch.
[00:25:13]
Yeah.
[00:25:13]
Yeah.
[00:25:13]
Well, now you can't use that hack anymore.
[00:25:17]
So aren't you glad that it uses XMLHttpRequest?
[00:25:20]
Exactly.
[00:25:22]
True.
[00:25:23]
And then actually, if you want to use an Elm platform worker with Node.js, you also have to use a sort of polyfill to make XHR requests work in Node.js with Elm.
[00:25:37]
So this one in a way.
[00:25:39]
Yeah.
[00:25:41]
Otherwise, HTTP requests will fail.
[00:25:43]
Oh.
[00:25:43]
I was like, I mean, if that's the case, I would have known about this. But because elm-review is using Node.js, I'm not making any HTTP requests from the Elm app.
[00:25:56]
So there you go.
[00:25:56]
That's why I didn't know, or I forgot about it.
[00:26:00]
Yeah.
[00:26:01]
So if we go back to that weirdness between commands and tasks.
[00:26:05]
So the way that I always understood it is, with a task, you always have, you always have
[00:26:13]
a response guaranteed, or somewhat guaranteed.
[00:26:18]
So if you call Time.now, then it will respond right away with the date.
[00:26:25]
If you do HTTP request, then it will respond when the response comes back or after a time out.
[00:26:34]
So one way or another, it will always come back, unless the user exits the page before then.
[00:26:41]
But the problem with commands, especially
[00:26:43]
with ports, is you don't, you don't have that guarantee.
[00:26:46]
So you can't make a request to a port and expect a response to come back.
[00:26:51]
I don't know if in practice it matters much, because I don't know if it's a problem if there is a task hanging, waiting for a response which will never come back.
[00:27:04]
Yeah.
[00:27:04]
As soon as some things are implemented by the user, you don't have the guarantee that it will come back.
[00:27:10]
But what I can imagine is that,
[00:27:12]
if we, if we know that everyone uses concurrent task and that scheme, and everything is set up as it should, then the line between what tasks and commands are useful for slims down a lot.
[00:27:30]
And I don't, I don't know if there's actually any reason not to have the same API for both.
[00:27:38]
Well, one thing that a task doesn't help with is sending a request
[00:27:42]
and not expecting a response.
[00:27:45]
A command is the only one that can do that.
[00:27:48]
Not not with many things, though.
[00:27:50]
But yeah, I think what you do in concurrent task seems like the pattern.
[00:27:56]
If you, say, do something like log to the console, like, you just return unit, an empty tuple.
[00:28:03]
It is kind of like it's sort of a response.
[00:28:06]
But yeah, that's what you do in a lot of things with Elm's task API as well.
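For example, a console-log style task still resolves, just with unit. A sketch, assuming an expectWhatever-style helper that ignores whatever the JavaScript function returns (the exact names may differ):

    module Log exposing (log)

    import ConcurrentTask exposing (ConcurrentTask)
    import Json.Encode as Encode

    -- Resolves with () once the JavaScript side has run console.log.
    log : String -> ConcurrentTask x ()
    log message =
        ConcurrentTask.define
            { function = "console:log"
            , expect = ConcurrentTask.expectWhatever
            , errors = ConcurrentTask.expectNoErrors
            , args = Encode.object [ ( "message", Encode.string message ) ]
            }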
[00:28:12]
But I don't know if there's a real reason for all of them responding.
[00:28:18]
Right.
[00:28:19]
Maybe it's just like, well, if we have a way to send a task request and not get a response, then some of the APIs get weird. Maybe it's like, oh, well, we just did this entire HTTP chain function.
[00:28:34]
And then at the end, because you use this function now, we won't get a response.
[00:28:40]
And that's unexpected.
[00:28:42]
Or.
[00:28:42]
That's just like, oh, that's a bit weird.
[00:28:45]
Like it's a foot gun is what I mean.
[00:28:48]
Yeah, you're right.
[00:28:49]
It does give you the option to... Before, like, there was always the guarantee that tasks will complete.
[00:28:56]
Whereas now, if you, if you can write them yourself, you can break those guarantees.
[00:29:01]
Yeah. Although there's there's always sort of an illusion of guarantees any time you're working with JavaScript, right?
[00:29:08]
Like, JavaScript is inherently something that you can't,
[00:29:12]
you can't expect to be well behaved.
[00:29:15]
And as anybody who has used it knows, I think.
[00:29:20]
And so, for example, it can throw exceptions like that's just a thing that JavaScript code can do.
[00:29:27]
And so you have to handle that.
[00:29:30]
And it may also time out.
[00:29:33]
That's like another class of poorly behaved behavior it could fall into.
[00:29:39]
Right.
[00:29:39]
But you could have a framework like.
[00:29:42]
So you could have a framework like Elm concurrent task, wrap the thing into a try catch.
[00:29:49]
Exactly.
[00:29:50]
And have a timeout.
[00:29:52]
That's right.
[00:29:52]
Yeah, exactly.
[00:29:53]
And then you're sure to have a response.
[00:29:55]
Yes.
[00:29:56]
Will the timeout be long enough?
[00:29:57]
That's a different question.
[00:29:59]
But.
[00:30:00]
Right.
[00:30:00]
And like, do you also want a response that is a timeout?
[00:30:07]
In plenty of cases, like, I think you would prefer not to have the response.
[00:30:11]
But you would prefer to have a timeout, or at least that it could be possible.
[00:30:14]
I don't know.
[00:30:15]
So one thing you mentioned as well is when you're getting back the data from JavaScript, you have a decoder.
[00:30:24]
How does that work?
[00:30:25]
And what happens if the decoding fails or do you need to do?
[00:30:30]
Do you just get a JSON encode value and you need to decode it manually?
[00:30:35]
Exactly.
[00:30:36]
So you get back a JSON encode value.
[00:30:39]
There.
[00:30:40]
Right under the hood, it will be a JSON encode value.
[00:30:42]
The JSON decoder gets run on it.
[00:30:46]
If it succeeds, you get the value back through your.
[00:30:50]
I've called it in the readme like the task flow, which is like your kind of regular error success flow that you do when you're chaining tasks together.
[00:31:01]
If there's some error handlers, there's some functions that you can use to say, like, if you get a decode error.
[00:31:10]
Do something with it.
[00:31:12]
You can either, like, fail with a custom error or you could, like, recover it and lift it into your, like, success type.
[00:31:18]
But if you don't add that in, there's what's called, like, an unexpected error gets returned out through the on response callback, like a message that you provide.
[00:31:32]
And that stops everything.
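A sketch of that recovery hook, assuming a helper along the lines of onResponseDecoderFailure (I am going from memory here, so the exact name is an assumption):

    module Recover exposing (withFallback)

    import ConcurrentTask exposing (ConcurrentTask)

    -- If the response decoder fails, recover with a default value instead of
    -- letting the whole run stop with an unexpected error.
    withFallback : Int -> ConcurrentTask x Int -> ConcurrentTask x Int
    withFallback default task =
        task
            |> ConcurrentTask.onResponseDecoderFailure
                (\_ -> ConcurrentTask.succeed default)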
[00:31:34]
You mean it gets sent back to JavaScript again?
[00:31:37]
No.
[00:31:38]
So this is all Elm side.
[00:31:39]
Like, it's not.
[00:31:40]
Oh, yeah.
[00:31:41]
On the Elm side.
[00:31:42]
Yeah.
[00:31:43]
On the JavaScript side, it's all, like, it doesn't really know anything about it.
[00:31:47]
And the ports are just, like, just send me, like, JSON encode values back through.
[00:31:52]
So those ports are very safe; like, I think you'd have a very hard time to make them blow up.
[00:32:01]
But then on the Elm side, it's all handling the, like, what do you expect to come back from the task?
[00:32:07]
And if it does something weird, it's, like, stop.
[00:32:09]
Abort.
[00:32:10]
Like, abort the task chain.
[00:32:11]
Like, don't send any more values back out through JavaScript at that point.
[00:32:16]
Mm-hmm.
[00:32:17]
So if somebody wants to wire this into an app, what does that involve kind of getting it set up with the boilerplate?
[00:32:26]
What do you mean?
[00:32:28]
Sorry.
[00:32:29]
Basically, what do you need in your model?
[00:32:32]
What do you need on the JavaScript side to wire it in?
[00:32:35]
What other messages and all that sort of thing do you need?
[00:32:38]
So you need a couple of things.
[00:32:42]
There's in your model, there's what's effectively the task, the tasks model.
[00:32:48]
I've called it a task pool.
[00:32:50]
The name could change.
[00:32:52]
But it's an idea that you can have multiple.
[00:32:56]
Yeah, you could have, like, multiple tasks running at the same time.
[00:33:00]
So it's, like, this is something that's managing all of the state internally of the in-flight tasks.
[00:33:07]
And you might want to swim in it, a task pool.
[00:33:10]
Exactly.
[00:33:11]
Yeah, exactly.
[00:33:12]
Every time.
[00:33:13]
Yeah.
[00:33:14]
So you need the task pool.
[00:33:16]
There's two messages.
[00:33:18]
So there's one that I've called, like, on progress.
[00:33:21]
So that's kind of, like, the internal wiring that's, like, every time you get, like, a response through the port, it's, like, kind of funneling that back through.
[00:33:30]
It's just, like, updating the progress of the task and then giving the next command to be performed.
[00:33:36]
Yeah.
[00:33:37]
And then there's one message which is, like, when everything's done, like, you've got, like, a final result from the task, which is either, like, an unexpected error or the success.
[00:33:50]
Okay.
[00:33:51]
And then there's just some subscriptions to wire all of that up together.
[00:33:56]
So it is a little bit of boilerplate.
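Roughly, that wiring looks like this; a minimal sketch based on my reading of the package, so pool, Pool, attempt and onProgress should be treated as approximate names:

    port module Main exposing (main)

    import ConcurrentTask exposing (ConcurrentTask)
    import Json.Decode as Decode

    port send : Decode.Value -> Cmd msg

    port receive : (Decode.Value -> msg) -> Sub msg

    type alias Model =
        { tasks : ConcurrentTask.Pool Msg String String }

    type Msg
        = OnProgress ( ConcurrentTask.Pool Msg String String, Cmd Msg )
        | OnComplete (ConcurrentTask.Response String String)

    init : () -> ( Model, Cmd Msg )
    init _ =
        let
            -- attempt returns the updated pool plus the command that sends the
            -- task definitions out through the port.
            ( tasks, cmd ) =
                ConcurrentTask.attempt
                    { pool = ConcurrentTask.pool
                    , send = send
                    , onComplete = OnComplete
                    }
                    (ConcurrentTask.succeed "hello")
        in
        ( { tasks = tasks }, cmd )

    update : Msg -> Model -> ( Model, Cmd Msg )
    update msg model =
        case msg of
            OnProgress ( tasks, cmd ) ->
                ( { model | tasks = tasks }, cmd )

            OnComplete _ ->
                ( model, Cmd.none )

    subscriptions : Model -> Sub Msg
    subscriptions model =
        ConcurrentTask.onProgress
            { send = send
            , receive = receive
            , onProgress = OnProgress
            }
            model.tasks

    main : Program () Model Msg
    main =
        Platform.worker
            { init = init
            , update = update
            , subscriptions = subscriptions
            }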
[00:33:59]
The advantage of the core task or command is all that, like, wiring is hidden away from you.
[00:34:06]
So you can just fire the task and then give it, like, a callback, like a success callback.
[00:34:12]
Then in, like, a command's case, it's just, like, just fire it and then stuff will happen.
[00:34:18]
Yeah.
[00:34:20]
So, like, you abstract away a few of the details of the JavaScript wiring.
[00:34:26]
So, like, you initialize your Elm application and then you can, you pass in a JavaScript object with all of your port definitions.
[00:34:35]
And your port definitions are, you know, instead of, like, going through that dance of the incoming and outgoing ports and wiring those in and adding subscriptions.
[00:34:46]
And then if you have a subscriptions port that you stop using, the Elm compiler drops it, and then the JavaScript code crashes because it's no longer defining this port, because it's unused code in your Elm code.
[00:35:00]
And all these, like, rough edges kind of go away.
[00:35:02]
And instead you just are defining these ports, these tasks for Elm concurrent task in a set of key value pairs.
[00:35:12]
You give it a name.
[00:35:13]
And you give it an async function or a synchronous function.
[00:35:14]
And then you can just go back and do it.
[00:35:15]
Okay.
[00:35:16]
So, you can do a synchronous function if you want in JavaScript.
[00:35:18]
And return some data and give it a decoder.
[00:35:22]
So, like, that feels like the right mental model for my brain to interact with that.
[00:35:29]
So, it's a little bit of wiring, but it cleans up a lot of things as well.
[00:35:33]
That's good to hear.
[00:35:35]
It's good to hear the mental model fits.
[00:35:37]
I've tried to abstract as many of those wiring details away.
[00:35:43]
But there is inevitably a bit of
[00:35:45]
boilerplate there
[00:35:46]
you just can't escape.
[00:35:47]
Yeah.
[00:35:48]
Definitely.
[00:35:49]
In this community, we have long accepted boilerplate as being okay.
[00:35:55]
Like, sure, if we could use less, we would not say no.
[00:36:00]
But, I mean, we're okay with it, right?
[00:36:03]
Yeah.
[00:36:04]
I think for me, it's like as long as I don't have to think too much about the boilerplate, then I'm happy with it.
[00:36:10]
Like, I don't want to be figuring out complicated logic.
[00:36:14]
It's just like.
[00:36:15]
Give me a list of things I need to provide and I'll do it.
[00:36:20]
Actually, like, there's one thing that I'm wondering about.
[00:36:23]
So, in the JavaScript, you set a, you call a function called ConcurrentTask.register.
[00:36:31]
And you give it your ports, your input ports and your output port.
[00:36:36]
And a list of tasks.
[00:36:37]
So, those are, that is the object which contains the names of the functions and then what to do with the arguments
[00:36:44]
that were sent.
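On the JavaScript side that registration looks roughly like this (the npm package name, option names, and the bundler-style Elm import are assumptions from memory):

    // Task names must match the "function" strings used in ConcurrentTask.define on the Elm side.
    import * as ConcurrentTask from "@andrewmacmurray/elm-concurrent-task";
    import { Elm } from "./Main.elm";

    const app = Elm.Main.init({});

    ConcurrentTask.register({
      tasks: {
        "console:log": (args) => console.log(args.message),
        "storage:get": (args) => localStorage.getItem(args.key),
      },
      ports: {
        send: app.ports.send,
        receive: app.ports.receive,
      },
    });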
[00:36:46]
And every time you need to add a new task, you need to both do it on the JavaScript side and do it on the Elm side.
[00:36:56]
And you have to make sure that those are always in sync, right?
[00:37:00]
Have you had any problems so far with them?
[00:37:03]
Like, oh, I forgot to set one up, or I had a typo in one of them.
[00:37:10]
I'm guessing not, but.
[00:37:12]
Well, it.
[00:37:13]
I mean, that inevitably happened.
[00:37:15]
I spent a bit of time wiring it into, like, an existing, like, running application that I've got.
[00:37:23]
And you spell the names wrong.
[00:37:25]
You're like, oh, damn, I missed that one.
[00:37:27]
It gives you, so it will give you back an unexpected error at that point, which is, like, it says, like, I couldn't find, I couldn't find a registered task with this name.
[00:37:37]
So, it's not perfect.
[00:37:39]
You can, like, you can mismatch the names.
[00:37:41]
But it does give you some hints to it.
[00:37:43]
You can say, like, you might have a typo or maybe you forgot to add it into the JavaScript object of, like, your list of tasks.
[00:37:51]
Yes.
[00:37:52]
So, this is, like, one of the things that I would like to do with Elm review.
[00:37:55]
And I put it on pause, but it's something that I'm pretty close to having done, which is having Elm review look at both Elm code and other files.
[00:38:06]
So, you could have, in theory, or it would work with this new feature that I'm working on.
[00:38:11]
That I have somewhere stashed up in a Git branch.
[00:38:16]
You could have a rule that says, I want to look at all the Elm code, but I also want to look at this specific JavaScript file or just all JavaScript files.
[00:38:28]
And it could look at the list of tasks that you define in your call to ConcurrentTask.register.
[00:38:34]
And then it can compare that with what you have in Elm on the Elm side.
[00:38:38]
And it could figure out, like, oh, I have this.
[00:38:40]
It could figure out, like, oh, well, the one you call on the Elm side does not exist in JavaScript.
[00:38:45]
Or even, like, oh, you have a task on the JavaScript side that is defined, but that is never used in Elm land.
[00:38:54]
So, that's something that I would like to see happen at some point.
[00:38:58]
Just, like, an extra layer of guarantees.
[00:39:01]
Like, you're sure that you have boilerplate, but it's done correctly for sure.
[00:39:06]
I would like to see that.
[00:39:08]
Yeah.
[00:39:09]
Because that case where you, that's an interesting one that you mentioned where you can't know whether it's unused.
[00:39:16]
Even if they do match up, you're not sure if there's no way of figuring out unless you've got something like Elm review to tell you, like, you can get rid of these two.
[00:39:24]
Like, they're never called.
[00:39:26]
The less JavaScript, the better.
[00:39:30]
As long as you put as many rules as possible, then it's fine.
[00:39:35]
Yeah.
[00:39:36]
Just exchange, like, 10 lines of JavaScript for 10 lines of JavaScript.
[00:39:37]
Yeah.
[00:39:38]
And then you can just write it for 10 Elm review rules and, yeah, it will be worth it, right?
[00:39:43]
Exactly.
[00:39:44]
It's great.
[00:39:45]
I'm obviously biased.
[00:39:50]
Something doesn't sound right here.
[00:39:53]
So, this might be less of a 1.0 thing, Andrew, and more of a future thing.
[00:39:59]
But I'm curious, especially as a maintainer of a similar mechanism who's been thinking about these things as well.
[00:40:07]
What are your thoughts on unit testing something like this?
[00:40:13]
I think there are ways to make it doable in a pretty nice way, but it takes a little bit of introducing some abstractions to make that possible.
[00:40:22]
Is that something you've thought about?
[00:40:24]
Very interesting you mentioned that.
[00:40:26]
So, I had the same thought.
[00:40:28]
I was like, I really wish I had a way to try these in isolation and, like, what are they doing in the real world?
[00:40:34]
And I wrote for that runs on CI.
[00:40:35]
And I wrote for that runs on CI.
[00:40:36]
And I wrote for that runs on CI.
[00:40:38]
And then they changed what I'm doing.
[00:40:40]
But I've got, like, a little integration test runner that's, like, a custom.
[00:40:43]
It's kind of a custom program for it.
[00:40:45]
And it's not fancy, but it, like, it does the trick.
[00:40:50]
Like, it's kind of a simple way of saying, like, I expect this value to be a success and you can match on this.
[00:40:57]
You can assert on, like, how long the tasks take.
[00:41:00]
So, I've got one that it was actually very hard to test the batch.
[00:41:04]
It was very hard.
[00:41:07]
So, this batch implementation of, like, running a very large, very large list of tasks, running that in Elm test for some reason, which I haven't figured out, it does not enjoy it.
[00:41:17]
It really, really struggles.
[00:41:19]
But if you're actually running it in a real application, it goes very quickly and doesn't have any problems.
[00:41:25]
If anybody knows why, that would be, I'm all open to suggestions.
[00:41:29]
But having that confidence, being able to actually run the tasks in, like, a test suite,
[00:41:33]
I mean, I think that was really, really valuable to make sure that I didn't mess any of the wiring up or stuff was coming back strange.
[00:41:45]
So, that is a JavaScript test suite, right, which runs the Elm or partial Elm application maybe?
[00:41:51]
So, it's actually just an Elm application.
[00:41:54]
And I've written, it's like a single, like, a platform worker application.
[00:42:01]
And you define, like, a list of tasks in your, like, in your test file.
[00:42:05]
And then you're just passing those to the runner and it runs them.
[00:42:08]
And then you can, there's some Elm functions to match on the results that come back.
[00:42:13]
It's not, it's not packaged up at the moment.
[00:42:16]
But if it's something that folks are interested in, like, if there's a need for it, I could definitely think about extracting it out into something that's a bit more usable in other projects.
[00:42:26]
My initial thought would be to use Elm program test for this.
[00:42:30]
Yeah.
[00:42:31]
And then have a library that makes Elm concurrent task work with Elm program test.
[00:42:39]
And where you would mock the JavaScript responses or the HTTP responses and so on.
[00:42:47]
And I think that would work quite well.
[00:42:49]
And it would probably work in Elm test.
[00:42:51]
Maybe you should open an issue for that problem you've had if you haven't already.
[00:42:57]
But, yeah, you're right that you're mocking the JavaScript side.
[00:43:00]
So it wouldn't be as reliable as your integration tests.
[00:43:07]
I mean, integration tests are usually more reliable, just not as precise as unit tests.
[00:43:12]
Yeah.
[00:43:13]
That's a very interesting idea.
[00:43:15]
Yeah.
[00:43:16]
It makes me think of, like, Elm program test, obviously.
[00:43:18]
And then Martin Stewart's Elm program test for Lamdera, where you get responses from the backend, interactions with the backend, with the frontend, and all those kinds of things that you could simulate.
[00:43:29]
And that's, since it's all just Elm code, it's fine.
[00:43:33]
I haven't seen that before, the Elm program test for Lamdera.
[00:43:37]
It's worth checking out.
[00:43:38]
There's a great talk where Martin walks through using the tool and shows the visual runner with all the connected clients.
[00:43:46]
It's very cool.
[00:43:47]
And, yeah, it gets really interesting, too, if you had some way.
[00:43:52]
I really like the Elm program test idea as well.
[00:43:54]
And, like, if you had some way to even have a few sort of clients.
[00:43:58]
Yeah.
[00:43:59]
I mean, I think it's a really good idea.
[00:44:00]
And I think it's a really good idea to have a few sort of core implementations for things like local storage where you could, you know, you could imagine if you wanted to get really fancy with this.
[00:44:10]
Oh.
[00:44:11]
You could say, given this initial value in local storage, and then let it actually simulate the setting and getting in local storage.
[00:44:19]
Let it actually simulate that.
[00:44:21]
And now you could have Elm program test.
[00:44:24]
You could even have a way to expand some of these definitions.
[00:44:27]
Where there's, like, a certain set of core web platform primitives that you're able to simulate in a fairly realistic way.
[00:44:37]
So, it's really interesting, man.
[00:44:39]
I think you could go one or two or ten steps further.
[00:44:42]
Like, if you try to simulate HTTP, you just simulate a browser and DNS systems and everything.
[00:44:49]
And, I mean, how hard can it just be to simulate Google.com?
[00:44:56]
In an Elm package?
[00:44:57]
Like, come on.
[00:44:58]
You can do it, Andrew.
[00:44:59]
Exactly.
[00:45:00]
I will say it too after this call.
[00:45:01]
Assuming that there's no free will and that the universe is deterministic and not probabilistic, in theory you could simulate the entire world and all its behaviors in a pure way.
[00:45:06]
Just as a pure function.
[00:45:07]
So, that's if you want to go a few steps further than that.
[00:45:08]
I think that's a good idea.
[00:45:20]
Yeah.
[00:45:28]
I feel like you went from giving him, like, oh, this is not for V1 but this could be for V2.
[00:45:33]
And now we're, like, can you just reimplement all of computer science please?
[00:45:40]
PR is welcome.
[00:45:48]
Yeah.
[00:45:54]
So, one thing we haven't asked
[00:45:56]
you about, or maybe not enough: like, what kind of use cases did you actually use this, uh,
[00:46:03]
package for? Like, when did you feel the need for something like this? Yeah, that's a good question.
[00:46:10]
I, so my, my initial need for it is, about a year and a bit ago I was working on a small medical app,
[00:46:19]
and I wrote the back end in Elm as an experiment and thought, this is great, it's lovely, just, just
[00:46:26]
such a nice, pleasant experience. And I'd set it up where each HTTP endpoint was effectively a task,
[00:46:33]
and I was using Elm Taskport then, and everything, it was working great. I was really, really happy with
[00:46:39]
the, with the, the readability and, like, how easy it was to change stuff. But the only problem I found
[00:46:48]
was that these tasks were performed one after the other, so all of the subtasks, and there were a
[00:46:55]
couple of things that I had to do to get it up. And there were a couple of, I had it in a CLI application
[00:47:01]
as well that was kind of doing some background stuff, like there was some sort of, like, work
[00:47:06]
processes that were running, and it just made me think, this would be so much nicer if I could run
[00:47:12]
a lot, if I could batch up a lot of these tasks at the same time. So, and then I did that, and now
[00:47:19]
that's all really fast and works exactly the same, so, well, almost exactly the same. It's, yeah, so that's,
[00:47:25]
I, from my perspective, I'd say, like, these kind of bucket, like, if you're using kind of a back end
[00:47:31]
with, like, HTTP endpoints, this, like, clear sequence of tasks, I've found, and, like, a CLI application, if you've
[00:47:39]
got a CLI command, very much like Elm Pages scripts, it's the same kind of idea. Like, those,
[00:47:46]
I found those use cases, really, they fit really nicely there. Yeah, because you want your server
[00:47:53]
to be as stateless as possible,
[00:47:55]
ideally, or each endpoint. But then, because you have to chain tasks that get transformed to commands,
[00:48:02]
now you need to store the state of ongoing requests in the model, and, yeah,
[00:48:09]
that doesn't feel very good, you can imagine. I definitely tried that to start with and quickly
[00:48:16]
got, oh my goodness, this is, it's just not a nice, it's definitely not a nice way of writing,
[00:48:21]
exactly as you say, like, a stateless, a stateless process.
[00:48:25]
So it's, it's basically the, the pattern of a functional core, imperative shell, which I think
[00:48:33]
is a really nice model. And it's like, so Elm is a really good sort of central processing unit,
[00:48:41]
brain, for, like, the, the hub of everything, and then the messy details, as needed, you can
[00:48:48]
delegate out to JavaScript code, and it can do whatever you need it to do, and you can define
[00:48:55]
asynchronous code that's, you know, just going and doing what it does best, like firing something
[00:49:03]
off and coming back on the event loop when it's done, and not holding up the event loop from
[00:49:08]
continuing while it processes those things. So, yeah, actually, like, Elm is pretty nice for, for
[00:49:15]
using for these sorts of, uh, functional core, imperative shell ways of scripting, doing some
[00:49:22]
sort of back-end work, or, or running in your front end.
[00:49:25]
So why did you call it Elm concurrent task and not Elm script?
[00:49:30]
Like, just like JavaScript.
[00:49:35]
Elm script.
[00:49:37]
Elm script, yeah.
[00:49:39]
I actually wouldn't be surprised if there was already a project called that.
[00:49:44]
Yeah, probably, actually.
[00:49:46]
Sorry if I don't remember you, author of Elm script.
[00:49:51]
I am curious about the word concurrent.
[00:49:54]
So it's the same thing as parallel.
[00:49:57]
The exact same thing.
[00:49:59]
Yes.
[00:50:00]
Let's open up that can of worms, Andrew, if you'd be so kind.
[00:50:09]
To be honest with you, my initial reason for calling it concurrent task is there's already Elm
[00:50:15]
task parallel as a package out there.
[00:50:19]
One that would be very confusing.
[00:50:21]
Like, which one do I use?
[00:50:23]
I think the other,
[00:50:24]
the reason I erred on the side of calling it concurrent was that technically
[00:50:29]
JavaScript is single threaded.
[00:50:32]
So it's a, it's a, it's a, it's a technicality that, um, yeah, is that
[00:50:39]
technically things in that environment cannot run in parallel.
[00:50:45]
Like true parallelism is like on different cores.
[00:50:48]
Let's say. But they're not just parallel. But yeah, I thought that
[00:50:53]
parallel,
[00:50:54]
as it's usually taught, means doing the same task on different data at the same
[00:50:59]
time, and concurrent is just, like, two things are running at the same time,
[00:51:04]
but they might be doing different things.
[00:51:07]
I feel like.
[00:51:08]
I'm right, but I also feel like people are
[00:51:10]
just gonna shut
[00:51:11]
me up.
[00:51:12]
Sorry, listener.
[00:51:13]
There's, there's a thread.
[00:51:16]
There's a long thread on the Elm Slack.
[00:51:18]
If it's still there, I
[00:51:19]
could...
[00:51:20]
Yeah.
[00:51:21]
I feel like I made the same point.
[00:51:22]
And then people corrected
[00:51:23]
me there.
[00:51:23]
There's a thread, just one thread,
[00:51:25]
or are there multiple threads on it?
[00:51:29]
I would have to reread it and process all that.
[00:51:37]
It's much of a muchness, I would say.
[00:51:39]
You could call it either.
[00:51:42]
You could call it parallel, concurrent.
[00:51:44]
To be honest with you, I think my main motivator is
[00:51:47]
I didn't want it to clash with the other package
[00:51:49]
that's already out there.
[00:51:50]
Just because it's like if you're reaching for that same name,
[00:51:56]
it's a small thing, but it's likely to cause a bit of confusion.
[00:52:00]
But I think your name is technically correct anyway.
[00:52:04]
Technically.
[00:52:05]
I think it is.
[00:52:09]
So technically correct is the best kind of correct.
[00:52:16]
Yeah, so I've definitely thought about,
[00:52:18]
Andrew and I have discussed whether
[00:52:20]
backend task would be a nice fit for sort of using
[00:52:25]
the concurrent task API under the hood
[00:52:28]
and maybe even making it interoperable.
[00:52:30]
But it's definitely a similar paradigm.
[00:52:34]
And it makes me think also like as a framework,
[00:52:38]
Elm pages would have the ability to give you
[00:52:41]
a little bit of wiring,
[00:52:44]
like all of the boilerplate for adding the right thing to your model
[00:52:48]
and defining.
[00:52:49]
And then also, you know,
[00:52:50]
you could be adding an incoming and outgoing port and.
[00:52:52]
And making sure that there's no typo in the names of.
[00:52:56]
Right.
[00:52:57]
Yeah, exactly.
[00:52:58]
All these little details and right.
[00:53:00]
Adding the right message.
[00:53:01]
They're just like a handful of little things.
[00:53:03]
But Elm pages could build in something like this for you.
[00:53:06]
Because in my opinion, like this is kind of like the ideal paradigm
[00:53:10]
for using these things.
[00:53:12]
So that could be an interesting thing to explore as well.
[00:53:14]
Definitely.
[00:53:15]
It's just I know it's a lot of work actually.
[00:53:18]
Yeah.
[00:53:19]
I think it's really important to be able to actually rewire all of it in
[00:53:22]
because it's, yeah, like, Elm pages is an expansive API.
[00:53:27]
It's very impressive what it's doing.
[00:53:29]
But I yeah, it would be a lot of work actually wiring it in.
[00:53:34]
So it's only like 50 modules.
[00:53:38]
Yeah, right.
[00:53:40]
I know.
[00:53:41]
And it only took him like a month to release or something.
[00:53:46]
Something like that.
[00:53:47]
Yeah.
[00:53:48]
I think it's a bunch of credits.
[00:53:50]
It all goes to his head.
[00:53:52]
I wonder if there is a way we could do it incrementally like to get some useful feedback
[00:53:59]
on like is it providing the functionality that you need or are there some like weird
[00:54:04]
edge cases?
[00:54:05]
We don't have to like reimplement everything under the hood with that.
[00:54:10]
I'm not sure how to do it.
[00:54:12]
That's the tricky thing when it's like such a backbone of it and there are all these like
[00:54:16]
things that are
[00:54:17]
deeply integrated. But it would be so nice if, like, the community could sort of coalesce
[00:54:24]
on, like, one nice way to do this. And yeah, I definitely think you've hit upon
[00:54:29]
a really elegant formulation of this sort of thing that as a community we've been iterating
[00:54:35]
on for a while in some way shape or form and starting to feel like it's really coming together
[00:54:41]
as a as a paradigm.
[00:54:42]
Andrew.
[00:54:43]
Have you by any chance heard about the elm-pkg-js
[00:54:46]
approach from Lamdera?
[00:54:48]
I haven't no.
[00:54:49]
Okay, Dillon.
[00:54:50]
Do you remember much about that?
[00:54:52]
Because I have... Yeah, I don't, but I feel like there's some kind of overlap here, where basically
[00:54:59]
Lamdera is a full Elm JavaScript...
[00:55:02]
Sorry, full Elm framework.
[00:55:04]
So back ends and front ends are in Elm, and you can't use ports, or at least
[00:55:09]
at some point you couldn't use ports.
[00:55:10]
So there's no way to do that.
[00:55:12]
But I mean, I think there is a whole range of possibilities.
[00:55:14]
Yeah.
[00:55:15]
At some point you couldn't use ports, but in some cases, like, it's necessary even just to use more recent
[00:55:24]
browser functionalities. So Mario, Mario Rogic suggested an approach called elm-pkg-js
[00:55:37]
to make that work, and I don't remember how it works. I wonder whether it works somewhat similarly,
[00:55:42]
like there are names that you can call and they are available as Elm values or ports or,
[00:55:50]
I don't remember. Maybe something to look at. Yeah, and there was also a similar, like,
[00:55:57]
a goal, I think, of giving a standard way of, you know, if, if somebody implements, uh,
[00:56:04]
something for local storage or copying to clipboard or these basic web platform adapters,
[00:56:11]
that
[00:56:12]
you can sort of plug and play with different adapters that the community is maintaining.
[00:56:17]
Yeah, and I mean, actually, like, so elm-pkg-js would be a solution for that,
[00:56:22]
I guess, but I would have to look at it again. But you could also potentially do that
[00:56:27]
with Elm Taskport, like, you already have a JavaScript implementation that just adds
[00:56:34]
arbitrary primitives and you make them available in Elm land. You, you have the power to
[00:56:42]
add new primitives like that. That is what this framework enables, right? Yeah, I've been working
[00:56:48]
on my own set of, uh, primitives that are useful for something like Node. I had, it was interesting,
[00:56:56]
I had, um, there's an example in the examples repo where, oh sorry, in the example, in the examples in
[00:57:03]
the elm concurrent task repo, there's, like, it's a little kind of pipeline worker. So it's like, it's
[00:57:09]
like a platform, platform worker that will, like,
[00:57:12]
listen, it's listening out for, like, SQS messages, and then orchestrates, uh, sorry, like an Amazon SQS,
[00:57:19]
like a queue. So it's, like, this, yeah, it's this thing, like, on a queue, and then does, like, a bunch
[00:57:25]
of, a bunch of things with various AWS services. And I had, like, I was quite surprised at how pleasant
[00:57:32]
it was to write. So there were some very minimal bindings to the AWS SDK. So, I mean, that's a much
[00:57:40]
bigger example, you don't want to go, like, whole hog on that. But actually having a set of, like, simple
[00:57:45]
primitives, like you could have for the web platform, or you could have with things on Node, like this
[00:57:51]
seems to work quite nicely. And then you just use, as Dillon, you were saying before, you just use Elm
[00:57:56]
as the brain, effectively, for orchestrating all of these together. So it's interesting having a
[00:58:02]
standard, I like the idea of having a, some kind of standard way of distributing those two, two things
[00:58:09]
alongside, like, the JavaScript
[00:58:10]
and the Elm module. Yeah, I mean, like, to me it's, it's one of the things that's so at the heart of
[00:58:18]
Elm and its design is this, this separation of the clean, pure Elm sandbox from the rest of the world.
[00:58:26]
But then, to me, the, the current design of ports as a command subscription sort of thing,
[00:58:36]
I know there is this concept of, like, the actor
[00:58:40]
model, and, and kind of sending something out into the world and not necessarily tying that together
[00:58:49]
to something that comes back in from the outside world. And that, that is an idea that I think Evan
[00:58:56]
did design intentionally, to, like, have it be maybe more fault tolerant, like, you know, like Elixir or,
[00:59:03]
um, Erlang's sort of actor model, for, for how the Erlang VM works.
[00:59:10]
But I, to me, the thing that makes, that's essential to Elm is the purity of the Elm sandbox,
[00:59:17]
and all bets are off for anything outside of that, in the outside world. And this preserves that, but
[00:59:24]
it makes it much more manageable to work with that. And then similarly, like, if you can install
[00:59:31]
Elm packages and they violate those guarantees and expectations, that would go against
[00:59:40]
that core design, to me.
[00:59:42]
Um, so if you could install a package and it has kernel code or it has ports defined... But at the same time, like, we've been trying, I think, in the Elm community for a long time to find a nice way to share a definition for
[00:59:55]
how you define something for local storage and these basic things, you know, how you use the
[01:00:03]
internationalization APIs, or APIs that need to be built into the web platform. They're important, we need to use them for a lot of different things
[01:00:10]
as web developers, um, but how do you share them and the type definitions for them and stuff like that? And we don't want
[01:00:17]
FFI bindings that we just have to trust. But I feel like there is something like elm, elm-pkg-js, elm package js,
[01:00:26]
Mario's specification, is trying to address this problem in some way, where it's like a shareable
[01:00:32]
definition that has an Elm type associated with it. And elm concurrent task does seem like it would
[01:00:40]
pair really
[01:00:40]
nicely with that, like letting you define these tasks. And if you could like install a task,
[01:00:47]
that could certainly be very interesting. And again, other people may have different opinions
[01:00:53]
on this, but to me, that preserves what's essential about the guarantees of Elm, which is
[01:00:58]
you can't trust the outside world. You're still not trusting it, but you're making it a little
[01:01:03]
more convenient to use it in a way where you're skeptical of it being safe.
[01:01:08]
The only problem is then everyone would have to install Elm concurrent task.
[01:01:13]
It would be a quite critical dependency.
[01:01:15]
True.
[01:01:18]
But it is interesting. Like, yeah, you don't really want to be re-implementing a lot of the
[01:01:25]
very low level stuff over and over again.
[01:01:28]
Right. Exactly. Well, Andrew, if somebody wants to get started playing around with Elm concurrent
[01:01:35]
task and learning more about it,
[01:01:37]
what's a good place to do that?
[01:01:39]
Because if you search on the Elm package website, Elm concurrent task,
[01:01:43]
that should point you in the right direction. The readme has some instructions on
[01:01:49]
installing the Elm package and the NPM package. And yeah, if you've got, please get in touch
[01:01:56]
if anything is unclear, like I'm available on Slack or GitHub.
[01:02:00]
Wonderful. Yeah. And I definitely recommend there are some really nice examples there as well that
[01:02:06]
are worth checking out. So yeah.
[01:02:07]
Wonderful. Well, thanks again for coming on the show, Andrew. It was a pleasure having you.
[01:02:11]
Thanks for having me.
[01:02:12]
And Jeroen. Until next time.
[01:02:14]
Until next time.