Lamdera lets you build full-stack Elm apps with zero glue code. We discuss the philosophy and the 1.0 release with Mario Rogic.
August 30, 2021

Lamdera is about removing non-essential complexity - 6 concepts

  1. Stuff that happens for the client (in the browser)
  2. Stuff that happens on the server (like a scheduled job)
  3. Data from client to backend
  4. Data from backend to client
  5. Frontend knows what it knows
  6. Backend knows what it knows


Hello, Jeroen.
Hello, Dillon.
And today we have another return guest.
We've got Mario Rogic here with us to talk about the 1.0 release of Lamdera.
Mario, thanks so much for joining us.
Thanks for having me.
Excited to be here.
I've been looking forward to this one really since we started the podcast.
Yeah, Lamdera is just one of those things in the Elm space for me that makes me go like,
that is why Elm is amazing.
Because the purity of Elm lets you do mad science like this.
And it's actually not that surprising.
It's just pure functions.
So I feel like Lamdera is one of these really cool things that seems like it should be,
it's such a big promise that it seems like it should be difficult to wrap your head around
and complicated.
But I think that it's actually deceptively simple.
And once you realize how simple it is, then you understand it.
So can you try to give us your distilled down version of what is Lamdera at its core?
And why is it interesting for developing applications?
No pressure.
Yeah, I think maybe I have to start with the...
I'll try to start with a concise version because the long version is probably quite long.
And the concise version might not be that concise.
I guess in short, the way that I think about Lamdera and I think the way that I pitch it
is Lamdera is about cutting out complexity.
I feel like it's just a really loaded way to explain that because I think every single
package that ever existed and every single CMS tried to make the same claim, right?
And so when I'm trying to say that with Lamdera, maybe a more isolated way to say it is cutting out nonessential complexity.
And when I say nonessential, I mean that maybe in a slightly more formal sense.
I spoke about this concept, and there's probably a more eloquent explanation in the conference
talk that I gave at Elm Europe announcing Lamdera.
But basically, when I talk about the essential complexity and I'm thinking about essential
complexity in a web app specifically, like a full stack web app, the way I think of it
in my head is there's stuff that happens for the client in the front end, stuff that's
happening in their browser.
And then there's stuff that happens on the server in the backend, things that only happen
there, maybe like a scheduled job, things that alter its state.
And then there's information that goes from the client to the backend, just at a conceptual
level, like there's data that's passed that way.
And then there's information that goes from the backend to the client.
So they're like the four kind of things.
And so they're the four things that happen, I would argue.
And then there's two sets of knowledge that exist, right?
Like the client knows what it knows and then the backend knows what it knows.
So those kind of six concepts.
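In Lamdera terms, those six concepts map onto the six core types every app declares in its Types.elm; a minimal sketch (the field and constructor names here are illustrative, not from the episode):

```elm
module Types exposing (..)

-- 5. What the frontend knows
type alias FrontendModel =
    { draft : String, messages : List String }

-- 6. What the backend knows
type alias BackendModel =
    { messages : List String }

-- 1. Stuff that happens for the client (in the browser)
type FrontendMsg
    = DraftChanged String

-- 2. Stuff that happens on the server (e.g. a scheduled job)
type BackendMsg
    = NoOpBackendMsg

-- 3. Data from client to backend
type ToBackend
    = ChatMessageSubmitted String

-- 4. Data from backend to client
type ToFrontend
    = AllMessages (List String)
```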
And it took a really long time for me to get to this level of thinking.
There was a whole set of iterations in between that had all sorts of other moving components.
But as I kind of refined and figured out that the complexity thing was the thing I was trying
to solve, it just kind of became clearer and clearer that these were the things that you
could boil it down to.
And I don't think you get any simpler than that.
I haven't figured it out anyway.
There is an argument to say that maybe what goes to the client...
Sorry, maybe the information that goes between the two could be considered as one thing if
it was like a request response dynamic.
But I argue not really because the backend has to know how to respond.
So fundamentally, I'm saying like the backend has to have...
Like both of them have to have knowledge of the thing that's going back and forth between
them in both directions, right?
I have to know what you're going to ask me, but I have to also know concretely what I'm
going to respond with and that that's something reasonable for you to handle, right?
I can ignore it and make it your responsibility, but either way, it's an implicit contract
between the two.
So what is Lamdera?
Lamdera is Elm applied to those core principles, and that's it, right?
The idea is that when you build your app, you literally build it just in terms of those
principles and nothing else.
So when I say nothing else, I mean there's literally nothing else.
So it's not like, oh, well, we're going to communicate with HTTP over a JSON thing.
So let's drag in all the HTTP semantics.
Let's drag in the notion of failure.
Let's drag in the notion of HTTP status codes.
Let's drag in the notion of streaming or HTTP2, like the protocol levels.
Let's drag in the notion of whether we're using JSON schema, like how we're transforming.
None of those things.
The idea is just be like, look, do you want to send some stuff to the backend?
Here's a sendToBackend.
Give it an Elm value.
Like, that's it.
Lamdera does the rest.
Do you want to send some stuff to the front end?
There's a sendToFrontend.
All you need is a client ID to know who you're sending it to, and you give it an Elm value.
And that Elm value just materializes in the front end.
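The two primitives being described have, roughly, these shapes in the Lamdera module (with broadcast, which comes up later in the conversation, alongside them):

```elm
-- Frontend side: hand Lamdera any ToBackend value; serialization is automatic.
sendToBackend : toBackend -> Cmd frontendMsg

-- Backend side: send a ToFrontend value to one connected client...
sendToFrontend : ClientId -> toFrontend -> Cmd backendMsg

-- ...or to every connected client at once.
broadcast : toFrontend -> Cmd backendMsg
```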
And then the same thing, perhaps more extremely, I think this is where lots of people get confused.
I say, you know, Lamdera has no database, because that's another external kind of thing.
So instead, how does it work?
Well, it works the same way if you understand how an Elm program works on the front end.
You have an update function.
You process a message, and then you return a new model.
You have the existing model, you return a new model.
And so my thinking was, well, what if it just worked exactly the same way in the backend?
There's no database.
It's just the model is your database.
It's your source of truth.
Let's drop the word database for a moment and just think about like, what is your data
full stop?
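Concretely, the backend loop has the same shape as a frontend Elm update function, and the model it returns is the persisted state; a sketch with an invented message and model:

```elm
type alias BackendModel =
    { messages : List String }

type BackendMsg
    = DailyCleanupTick

-- No database layer: returning a new model *is* the write.
update : BackendMsg -> BackendModel -> ( BackendModel, Cmd BackendMsg )
update msg model =
    case msg of
        DailyCleanupTick ->
            -- e.g. a scheduled job trimming old messages
            ( { model | messages = List.take 1000 model.messages }, Cmd.none )
```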
I don't know how good an explanation that is, but hopefully that puts some words to it.
I feel like we're missing one thing, which is: what is Lamdera used for?
I feel like we missed the explanation around Lamdera being a pair of front end and backend
systems, all merged into one and made very simple.
So I would say that Lamdera is a full stack platform for building delightful web apps.
Meaning that, yeah, you write Elm in a single code base, where everything type checks together,
your front end and your backend.
And then when you deploy, Lamdera basically splits your front end and your backend,
provides all of the infrastructure and the deployment pipeline and the hosting to then
make that work in production as a full stack app where the client gets the normal kind
of webpage loaded, the backend exists separately, the backend has persistence and it functions
like you would expect a full stack app to function, but it's just Elm.
It's purely end to end.
And then, you know, it carries over the kind of Elm expectations you have.
So no runtime errors in practice.
If you change your types, you know, if you change your types in the backend, you'll get
compiler errors for the front end.
Yeah, all that kind of really nice stuff from Elm.
It's essentially like, what if communicating with the backend was not an HTTP request command,
but rather just a message?
You just send data and it automatically knows how to serialize it for you under the hood.
And so, like, you know, I hear a lot of talk these days about like this backlash against
microservice architectures that actually like you're taking this thing that you had this
easy to understand system, this nice co located system where it had these background jobs
and these endpoints, and then suddenly you've turned it into something that's difficult
to understand because it turns into like a Rube Goldberg machine.
So to me, it feels like Lamdera is a monolith, but not just horizontally, but vertically:
it's a full stack monolith.
And what would happen, what simplicity would emerge from collapsing together the front
end and backend code into one code base?
So not only do they share code, not only are they written in the same language, but it
is actually the same code.
It shares the same types.
And basically, like, actually from reading your docs on the Lamdera dashboard, where there's
like a lot of really good insights, the big headline for me seems to be,
hey, we spend so much time writing glue code.
And also we have such a fear of changing things because of glue code.
And in fact, you know, Elm makes it easier to deal with glue code because we feel that
it's more safe to do so.
And things like Elm GraphQL can help give you confidence there, types without borders,
you know, but what if we didn't even need that?
Because that's still a layer of things where things can go wrong, where there's extra work,
where there's our cognitive load is going to those problems, which aren't actually problems.
Or you also need to learn new technologies like GraphQL, a new backend language.
All that stuff.
Yeah, exactly.
You try to get a more elegant ORM for updating things in the database and doing SQL queries
in a higher level way.
What if it was just declarative and there was no glue code?
Like literally it's just like, to me, Lamdera is the answer to the question, like, could
you use Elm to completely declaratively write things and eliminate glue code?
So there's no, you just declaratively say, I have this data in the backend, this data
in the front end, this is how they change.
And then all the glue code emerges from that declarative description.
So glue code, I've found, is really difficult to talk about,
because it may mean different things to different people.
And I know you guys like definitions.
Maybe we go on a little tangent with definition of glue code.
I really struggle with this.
I feel like this is a thing that should exist and should be formally described somewhere,
and it'd be awesome.
Maybe by virtue of doing this podcast, someone will message me finally and be like, Hey,
it's called blah, blah, blah.
And you completely missed it.
But I've been struggling to find a term for this.
I just had to invent one, like the whole Ludwig Wittgenstein thing, right?
The limit of my language is the limit of my world.
So I'm like, okay, so I've been calling it semantic boundaries because as I was trying
to figure out this complexity thing, I was more and more stumbling across little things.
And then eventually I felt like I got enough things and it clicked what the source of the
complexity that I'm talking about was.
So I think essential complexity is hard to deal with because it basically means whatever
your business rules are doing and how they change.
That's a whole skill set of a developer.
You're applying your technical kind of mind against your business stuff.
And there's just so many reasons why that can go wrong.
Like there can be political reasons or process reasons or things change over time.
And suddenly you realize the system that you built on certain assumptions is gone.
And that's like, I'm not talking about that kind of complexity, right?
That's how I don't think that's solvable.
And I think that's our job, right?
Like in perpetuity.
The complexity I'm talking about, I started now to recognize as what I call semantic boundaries.
And now that I've called it that and figured out what it means, suddenly I can't unsee
it anywhere.
And it's really upsetting.
And so what it is basically is when I say semantic boundaries, I'm talking about any
boundary you have where on one side of the fence, the set of things for how you describe
how that thing works is different in any way.
Even like if it's really subtly different from something on the other side of the fence.
So like one example I would give is, say you're going to make an HTTP request, right?
There's at least two really clear boundaries for me there.
First thing you're going to be, like usually, at least in my thinking in a language, I'm
going to be like, okay, cool.
What am I going to use as a data layer?
And like what encoding am I going to use?
Say normally you reach for JSON, right?
JSON is usually going to be either if you're lucky a subset, but probably not.
Probably it's like a Venn diagram overlap with whatever the thing is that you're encoding.
So it's like an integer, yeah, okay, we can put that straight into JSON.
A float, probably. A dictionary? Okay, well now I have to choose how I'm transforming
this dictionary into a representation in JSON, right?
That is glue code, right?
Because that is code that is entirely, entirely about converting between the semantics of
those two things, right?
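To make the dictionary example concrete, here is what that glue looks like in plain Elm: a `Dict Int Float` has no direct JSON equivalent, so you have to pick a representation by hand (here, an object with stringified keys; the function name is invented):

```elm
import Dict exposing (Dict)
import Json.Encode as Encode

-- Pure glue: this code exists only to convert between the semantics
-- of an Elm Dict and the semantics of JSON (where object keys are strings).
encodeScores : Dict Int Float -> Encode.Value
encodeScores scores =
    scores
        |> Dict.toList
        |> List.map (\( key, value ) -> ( String.fromInt key, Encode.float value ))
        |> Encode.object
```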
Yeah, it doesn't, I believe it doesn't serve any value other than that, and maybe my position
on why I feel so strongly about that will become clearer as we go on.
So let's say then now you're taking this JSON and you're putting it into HTTP.
That like more subtly is a semantic boundary, right?
Because you say, well, what can they represent?
Well, like everything, right?
It doesn't matter.
HTTP doesn't care.
You put whatever you want in the payload.
Well, no, because JSON doesn't have the concept of a 404, right?
There's no, like, if you have your JSON blob, there's no, oh, I read my JSON file from
disk and I got a 500 Internal Server Error.
You know, like that doesn't happen.
That only happens when you go through HTTP.
So what you're dragging in is you're dragging in all of the HTTP semantics now, right?
They're on top.
And so even though you're not explicitly formatting your JSON, the boundary there is that you
have to handle these semantics in your program now, right?
Your program could get a 404.
It could get a 500.
It could get a 382.
What's a 382, Dillon?
No idea.
I have to go to Google, have to find that out.
Why is this crashing and buggy now?
You know, like, this is something where six months from now, you're going to be like,
why did this happen?
And so I, yeah, on the Lamdera kind of features page, I have this big diagram that
tries to show that with a full, like a classical full stack app.
And so when I'm talking about glue code, that's what I'm talking about.
It's not glue code like, oh, you know, we just have no choice.
We have to access a third party HTTP API, you know, and I have to write code for that.
I'm not being like, kill all third parties.
You know, I'm not saying like, oh, you shouldn't do anything.
I'm saying that's, that's essential to me.
Cause you have to talk to that person, right?
Where I think it's not essential is when you're doing it within your own system.
And then, you know, we get a lot of people that will ask me like, oh, well, you know,
what about if some third party wants to query stuff, now that the format's
closed off and they can't get at it? I'm like, yeah, those things are true.
And if that's what you need, then it's no longer glue code, right?
If that is like concretely part of your requirements that you have to serve an API to someone,
well then you got to, you're probably forced to take on those problems.
And then, you know, it becomes a political thing.
If you can't like in your company, if you can't influence that third party to do it
a certain way, or do something, you know, they demand it, or maybe it's contractual
or whatever it is.
Well, that's fine.
We can use HTTP and JSON.
Those things exist and, you know, we have to do them.
But really, like, if we kind of boil down to the root of where Lamdera started,
the kind of goal I keep my eyes honed on is: I wanted to see what it was
like to have a system where you could cut out all of that nonessential complexity and what
that would look like.
And the inspiration really for that was Elm, you know, like Evan made a really bold thing
by being like, what would it look like if we didn't have inline JavaScript?
A lot of people seem to be like, that's crazy, you can never do it.
Like, we need FFI.
And I'm like, no, maybe you don't.
What would happen if you didn't?
What would it be like?
And that's like, that's the guiding stuff for Lamdera at the moment.
Make a bold assumption and see what follows from that.
And yeah, Lambda really feels like that.
And it makes me think of like, I can't remember if you've mentioned this analogy or if it's
from the back of my brain or what, but it's like a game of telephone, this glue code you're
talking about, maybe Jeroen's moving his eyes in such a way that maybe he doesn't know what
the game of telephone is.
Or maybe you don't either, Mario.
No, I don't either.
Okay, good.
Then that gives me the opportunity to explain the analogy.
So it's a game of telephone, maybe it's an American thing, is something, you know, children
play in school where, you know, you've got people lined up in a row, and you whisper
something into the person's ear.
So the first person whispers something into their ear, it's a message, and then the next
person whispers it into the next person's ear, and so on.
And the message continues around and comes all the way back to the original person who
said the message.
And inevitably, it gets lost in translation somewhere along the way, there's a loss of
information and it's like an emergent system, you get this, like completely absurd, bizarre
message, and that's what it feels like writing glue code sometimes, as it's just, you know,
as you said, like, it's the lowest common denominator, you've got to communicate, like
the semantics of what represents a POSIX time is now an int.
And what represents a parse-don't-validate type, that represents the validations and the guarantees
of it, and all of this is an object with strings and ints.
And no guarantees. And you know, there are some things, like custom scalars in GraphQL,
that can help with that.
But still, it's a contract; you need to ensure that both sides are honoring that contract.
But what if you didn't, because there weren't two sides, it was just one, it was just one
monolith where you just use it, and you're not going through those translation layers,
and you're not playing the game of telephone.
So it's huge, it's the, you know, benefits of that are huge.
So fun, fun side note for listeners in UK and Australian English, this game is called
Chinese whispers.
I just looked it up.
All right.
Yeah, there's a Wikipedia article.
So you're familiar with that game?
You're like, this rings a bell?
Yeah, absolutely.
In French, it's Arabic telephone.
Arabic telephone.
How interesting.
And by the way, it's a very fun game if you do it with mimes.
That sounds extremely French.
That's great.
Pulling a rope.
Does everyone have to get out of the box dressed as a mime in the black and white?
This sounds incredible.
I would watch a YouTube video of this.
I know what I'm doing when we finish this recording.
But yeah, that you mentioned before the microservices thing.
I just find that really funny because I feel like, by accident, I've been trying so many
things working towards Lamdera over the years of my career, but I think I've been doing
it without realizing where I was wanting to go.
And yeah, like each time it was like the new CMS and the new framework, I was like,
oh, this is great.
This is going to be the one.
Like it sounds good.
And then you get into the depths of it and then you just end up being so disappointed.
And yeah, the last big one was microservices.
I just remember being so excited.
I was like, ah, it makes sense now.
When I heard about the concept, I understood it and I read all like Martin Fowler stuff
about it.
I'm like, this makes sense.
This is how we manage complexity.
And my last job, yeah, we started doing that.
We started building a bunch of services and we went running with it.
And now in hindsight, I find it hilarious.
At the time, I was working in Melbourne, Australia back then and REA, Real Estate Australia,
was like our version of like a Spotify company.
They were doing really cool engineering.
They were leading the field and lots of talented people were going there.
And one of the talented engineers there, she'd made this library called Pact.
We were using it for Ruby.
And basically it was like, yeah, you build up all your microservices and then you use
Pact to basically generate like a schema definition of what your service provides.
And then you would use Pact on the client sides as well to generate schemas for what
they consume.
And then you would put Pact into your build pipelines and basically Pact would pull in
your other repositories and basically type check that the contract you have is the contract
they expect.
So when you change your APIs and stuff in the future, you could get like a build failure
to be like, no, no, you can't deprecate that because so and so team kind of uses it.
And now looking back, I'm like, my God, all we did was break up our monolith into a distributed
monolith and then try to add all these like second class type checking things back.
And I was so disappointed.
I'm like, it doesn't help at all.
We tried to remove complexity by adding a whole new level of complexity.
I mean, I don't know how much you guys know about distributed systems, but pretty much
everything I read is like, don't.
They're really hard.
And it's like, you've got to learn about CAP theorem now.
I'm like, no, I just wanted this service to scale.
I didn't want CAP theorem.
I just... what?
And so yeah, I think that's really kind of influenced where I've ended up with Lamdera,
where I'm kind of like, yes, yes.
I'm not saying like throw those things away.
I'm not saying Lamdera is a silver bullet.
But man, just like 95% of the stuff I've worked on did not need that stuff.
We needed a better way to deal with complexity, not like a better way to scale those apps.
Scale wasn't a problem.
It wasn't the primary problem anyway.
So yeah, Lamdera is kind of like really honing in on that, trying to be like, okay, what
would be a more delightful way?
Is there a more delightful way to deal with that complexity, to express your essential complexity?
And then when you have scaling problems, you deal with that particular thing.
And I think that it's easy to forget with these really sleek solutions that there are
things that can go wrong with these layers.
And like with elm-pages, at one point people were asking like, oh, is elm-pages going to
have like a GraphQL layer where you can query for data that you've gathered, Gatsby-esque,
and I was like, that is absolutely not my goal, because Elm has a type system
and I just want you to use that when you're getting data.
I just want you to get data in Elm.
I don't want you to have to go through another hop and another translation layer and lowest
common denominator.
Because people forget, because GraphQL is such a lovely way of doing that, of solving that problem.
Like, I need to use the GitHub API, and it's like, wow, this is great.
I can see the documentation there.
I can see what data I can get.
I can type check it.
It makes sense.
But if you don't need that, it's better to avoid it altogether.
So you know, just removing those translation layers is always a good thing.
Even the most elegant translation layer and the one that enforces the contract and is
well typed and such, you're still, it's lossy.
You're still losing type information and guarantees and creating new contracts in the process.
And there's a lot of implicit stuff there as well, right?
I think we all know Elm quite well and have used it and really comfortable with it.
So some of the stuff that like maybe newcomers or people entirely new, like they may hear
a lot of the words that we use and be like, yeah, my language has that.
I don't really understand, you know, like Haskell has that.
What's the difference?
I think the difference is like when you actually use it and you feel the ergonomics, like it's
not just the fact that Elm has that type safety.
It's just that it's instant, right?
Like it's fast.
Like, I mean, one of the banes of having to build a platform is you'd inevitably never
get to use the platform, right?
So I feel like 90% of my time is writing Haskell in the Elm compiler, right?
Like that's where most of Lamdera's stuff is, right?
It's Haskell.
And it's just not nice.
Like I love Haskell.
It has so many cool things and it was my gateway drug into FP, but man, you know,
waiting 30, 40 seconds just to type check things.
Like, yeah, it's so understated how like Evan's focus on making that performance and that
speed really fast, it just changes things.
You know, fast type inference and slow type inference may as well be
two completely different things in my eyes, right?
Like just the way that you build things and the way that you approach stuff when you have
it be that quick is completely different.
And I found the way that I approach building full stack apps with Lamdera as a result is
just completely different, right?
Like the confidence that you have to charge into making wide sweeping changes and stuff
like that.
It's just a lot of the same stuff that people write about for Elm, right?
And that's the part that I really enjoy when I get the chance to do it.
You have to live vicariously through Martin Stewart or people with the first name Martin.
I'm so jealous of the Martins.
They build such cool stuff and I'm like, oh, cool.
I'm just ice cream vendor after what you eat it.
So, so many great things to talk about here, but I want to make sure
that we paint a concrete picture a little bit with, like, what are the actual,
you know, tools at your disposal writing Lamdera?
Like, for example, sendToBackend and Types.elm, and, you know, the init and update
and subscriptions and updateFromFrontend in the Backend.elm module.
So like, let's, let's dig into that a little bit.
Like, you know, let's say, I mean, for example, let's just say we're building a real
time chat application, and we haven't really mentioned yet, but it's real time.
So building a chat application is almost like a hello world in Lamdera, right?
Because it's, it's so trivial to do real time data.
I mean, it's literally the hello world.
It's one of the canonical examples: a very simplified chat without much concern
for things like reconnecting or messages dropping, stuff like that.
But still, yes, I think it's like 200 lines of code, where 120 lines
is the UI.
And yeah, the reason it's so simple is because, at the risk of laboring the point, it brings it
back to those core principles.
So if we just forget Lamdera for a second, say, all right, let's design a system talking
about the six core principles we had before, right?
There's what the backend knows, there's what the front end knows.
There's messages that happen on the front end.
There's messages that happen on the backend.
There's data going to the backend and coming from the backend.
So let's design a chat app just talking about these principles.
And if we do that, it'll take a couple of minutes and we've basically designed the code
that's there.
So how do we design it?
Let's start with the front end, right?
I would say, okay, we build a UI, we build a box, we build a text input.
Let's say we were going to get slightly fancy, right?
Slightly fancy.
So every time I type in my message and I hit submit, like, that's a front end message in
Elm, right?
Hitting, you know, ChatTextSubmitted, right?
And so my update function, I'm handling that.
And let's say optimistically, I just add it straight into my model, right?
So I'm going to say, like, I'm just going to assume that this message is going to be
in the chat in my model.
I have, let's say, let's just be really basic.
Say I have a list of string, right?
A list of messages.
Let's forget authors and stuff.
We can, we can expand it.
Let's just say we have a list of string.
So I add it to my list of string.
So that's how I change my model.
And then I have in my command, a center backend, you know, a new message.
And then let's say, let's go record, right?
I'm going to have a record, I'm going to say username : String, text : String, right?
So I hit save on this and then immediately I get a compiler error, right?
Which says, well, this value that you're trying to send to the backend, this
doesn't exist.
So I go into my Types.elm where all of the core types live.
And then in my ToBackend custom type, I put a message, you know, I don't know
what we're going to call it, ChatMessageSubmitted.
So now I hit save and now immediately I get another compiler error, right?
The compiler's being like, cool, you've said that you expect this message, but
the updateFromFrontend handler in your backend doesn't have this in the case statement.
So I go there and I add the case branch.
So this message has come in.
It's a record with the name of the user and the text.
What do I do with it?
Well, let's say again, naively in the backend, let's say I just have a list of string.
So I just append it to the list.
And then for good measure, let's say that just really naively, I will send the entire
message history to the front end.
That's kind of dumb.
It's not optimal.
Let's just assume... and better yet, let's just use the other primitive.
So instead of sendToFrontend, which requires a client ID, let's do Lamdera.broadcast,
which sends to everybody.
So I do Lamdera.broadcast and then the constructor of that custom
type, AllMessages.
And I give it the list of all messages.
All right.
Now I get a new error that says, okay, well this doesn't exist in your types.
I go back into the types.
There's a ToFrontend type.
So I go, okay, fine.
I need the, you know, AllMessages ToFrontend message.
I hit save; new compiler error.
Your updateFromBackend in the front end is saying you don't cover this message.
Go there, implement it.
And what do we do?
Let's just say naively, every time we get a full list of messages from the backend,
let's just replace our model wholesale.
Let's just completely replace the list of messages.
We hit save, that compiles, and we're done.
If we deployed this right now as it is, and multiple clients open that
message stream:
if one types something, that's going to get sent to the backend, the backend saves it, sends
the whole model to everybody.
That message is available for everybody.
If the other person said something, the same thing's going to happen.
Yes, potentially we have a slight race condition where, if two of them type a message
at the exact same time, they might very, very slightly show in different orders on their
front ends.
But a second later, the backend is going to send all of the messages.
So all of the ordering is just going to get fixed.
So really, really naively, the only code we've written was the
code that had to do with the business logic of this, you know, stupid,
simple chat that we wanted to implement.
And yeah, just by virtue of how those kind of six principles break down, right?
Like the four message types and the two model types: chat fits really nicely to that
by accident.
So yeah, that's the design-it-yourself-without-code
version of the Lamdera example.
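Written out, the walkthrough above lands on roughly this (condensed, with imports, init, and the UI elided; the names are the illustrative ones from the conversation):

```elm
-- Types.elm
type ToBackend
    = ChatMessageSubmitted { username : String, text : String }

type ToFrontend
    = AllMessages (List String)


-- Backend.elm
updateFromFrontend : SessionId -> ClientId -> ToBackend -> BackendModel -> ( BackendModel, Cmd BackendMsg )
updateFromFrontend sessionId clientId msg model =
    case msg of
        ChatMessageSubmitted { username, text } ->
            let
                newModel =
                    { model | messages = model.messages ++ [ username ++ ": " ++ text ] }
            in
            -- Naively broadcast the full history to every connected client.
            ( newModel, Lamdera.broadcast (AllMessages newModel.messages) )


-- Frontend.elm
updateFromBackend : ToFrontend -> FrontendModel -> ( FrontendModel, Cmd FrontendMsg )
updateFromBackend msg model =
    case msg of
        AllMessages messages ->
            -- Replace the local list wholesale; ordering self-corrects on the next broadcast.
            ( { model | messages = messages }, Cmd.none )
```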
So now the cool thing is you've built this naive chat app that doesn't
have users or any of those basics, just to get something up.
But now it's a type safe Elm application.
You've got your Frontend.elm, your Backend.elm, your Types.elm, which has the
ToBackend and ToFrontend messages, which are just message types
like we're used to in a vanilla Elm app, except they're the messages like you described in
those six different areas.
So now if we say, I don't want this to be a list of String anymore, I want to have the user IDs associated with them. And so now you say, this SendToBackend message that you had, that was sending a String for the new message in the chat, it's now a record, and it has the user ID and the message. Although I guess you already have the session ID that you're receiving from that user, so you don't even necessarily need to change that.
But let's say you're adding some bit of metadata to the chat somehow, or you're creating a new message, like, you know, if they change their status or something like that.
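As a hypothetical sketch of that kind of change (these are two successive versions of the same type, with invented field names):

```elm
-- Before: the chat message is just the text.
type ToBackend
    = MessageSubmitted String

-- After: a record carrying extra metadata alongside the text.
-- The compiler then points at every spot that constructs or
-- pattern-matches MessageSubmitted until they're all updated.
type ToBackend
    = MessageSubmitted { userId : String, message : String }
```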
But it's an Elm application, which means when you change things, the Elm compiler is going to walk you through how you need to change things. So let's talk about that process a little, because there are some ways that that's going to be a familiar experience, and then some ways that Lamdera introduces some new concepts that are really powerful features: the migrations, migrating these messages.
Yeah, absolutely.
So you've touched on two things there.
So the first part of it is like changing that code. And my goal is to make that as close to, or basically identical to, the Elm refactoring experience as possible. That's what it is. I just love that experience.
I think it's the nicest refactoring experience I've ever seen in my career.
And so my goal with Lamdera is to be like that, like there's no reason to not preserve it.
And that kind of extends to, so two versions ago, I did a lot of work, because I started initially with an assumption that Lamdera wouldn't be backwards compatible with Elm.
So some choices were made, and I had separate caches and stuff, and it turned out to cause issues with IDE and tooling.
So a couple of versions ago, I spent a lot of time trying to actually reconcile that.
So Lamdera is not forwards compatible with Elm right now, but it's backwards compatible. What that means is, if you had an Elm project already and you've been working on it and you install Lamdera, you have to run a lamdera reset command initially, just to bust the caches and let the Lamdera Elm compiler build all your caches for you, because of some of the stuff that we do under the hood. But once that's done, it's backwards compatible. So you can go back to all of your Elm projects and they'll all work fine. They'll work fine with the Lamdera caches, just Lamdera won't work with the vanilla Elm caches.
So the idea there was, yeah, now as far as I'm aware, all of the Elm tooling, the IDE tooling, whether you want to use the language server or whatever it is, once Lamdera has compiled stuff and the caches are there, that IDE experience should be the same. So everything that you're used to with Elm, or if you really like using IDE stuff, like rename all symbols or extract let expressions to functions or whatever you're doing, like renaming types, that should be as normal. There should be nothing new there.
So then the question we get is like, okay, say we got really excited. We deployed our naive app, a whole bunch of people have been chatting in it, and it's really successful, and we're like, oh man, now we want to add all these new features, right?
The types have changed concretely, and what do we do with this production data, right?
It's in the wrong shape, right?
Throw it away.
Drop the database and...
Lamdera's philosophy is all data is transient, just throw it away.
No, no, not at all.
So this is probably the most complex part of Lamdera at the moment, and the stuff that's maybe the least intuitive. I feel like once you get the core premise, it's pretty intuitive, but the implementation is still pretty tricky. But basically the core idea is, I mean, it's pretty boring. It's like, we have something in shape A and we have something in shape B; how do we get from A to B? It's just kind of stock-standard Elm.
How do you do this?
What about this?
So yeah, essentially all that Evergreen, the migration system of Lamdera, is, is me saying: I want to change from that shape that is in production to the shape that I have now, the new shape of types that I've changed. Now I've got usernames, and we've got timestamps, and we've got message colors, and whatever we want to have, right? How do I deploy this new version and have all that sorted?
And yeah, the answer is: you write a function. Lamdera kind of auto-generates some placeholders for you and goes, look, I've got this old type, it's this shape, and you've got this new type, it's this shape. I need a function from the old type to the new type. Here's the old value. Do what you want. Just write a conversion.
If you can write a conversion and it type checks, then I'm like, okay, cool. Lamdera is going to say, well, I will now guarantee to you that this thing that you're saying you're going to deploy is going to deploy, right? I've type checked it against production. You told me how to move all those values from one to the other. And I have checked in production that those types are still the types that you're saying. So at this point, if you hit deploy right now, I'm going to guarantee that all of that makes sense.
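A hedged sketch of what such a migration module can look like. The module layout and the ModelMigrated constructor follow Lamdera's generated Evergreen placeholders as I understand them; the field names and the "anonymous" default are invented for illustration:

```elm
module Evergreen.Migrate.V2 exposing (backendModel)

import Evergreen.V1.Types as Old
import Evergreen.V2.Types as New
import Lamdera.Migrations exposing (..)


-- Convert the old production model (plain strings) into the
-- new shape (records with a username), filling in a default.
backendModel : Old.BackendModel -> ModelMigration New.BackendModel New.BackendMsg
backendModel old =
    ModelMigrated
        ( { messages =
                List.map
                    (\text -> { username = "anonymous", message = text })
                    old.messages
          }
        , Cmd.none
        )
```

If this compiles against both the old and new type snapshots, the deploy can be checked end to end.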
So the insight there that I realized eventually was, that's what I was missing on all the projects that I've worked on before. No matter what kind of guarantees you have within your app, what we're missing is guarantees between apps. And I think the closest I've seen to that, or the framework that I've seen do it the nicest, is probably Rails.
They've got Active Record migrations, and there's this notion of being able to roll forward and roll backwards. For a lot of the default operations, if you say, rename this table, Rails figures out that renaming it forward would mean renaming it back if you rolled it back. And you could test that locally and write tests for it.
But even still, it's not type safe.
We've definitely had experiences in the past where you have a bad migration in production
and it forgot that some column could actually be null or whatever and then just the whole
thing blows up.
And so my goal with the Evergreen migrations in Lamdera was to not have that happen.
And there's an important feature that Lamdera has that with Rails you wouldn't get, which
is like Rails gives you the ability to do database migrations.
But if you're rolling out a new version where the API is not compatible, there's no notion
of that.
You're on your own.
You have to build your own conventions and contracts for that, which there are ways to
do that, but you're on your own doing that and there's nothing to protect you from doing
something incorrectly there.
But with Lamdera, the contract is not like a REST API like in Rails, where if you change the contract in the API, you have to do API versioning and create new APIs and still be backwards compatible if you want to be, or whatever. But you could be in transit, receiving a message from an old version, and then the messages you expect change.
And for that, you've got a way to migrate these old incoming messages.
So you can like, well, I think you call it like hot reloading the backend, right?
So it kind of just shakes out of the same design.
It's nothing really special or different.
So yeah, the edge case that you're mentioning there with a lot of classical apps is like,
when I say classical, I mean like the current kind of full stack thing that you would expect.
There's a server in some language, a front end in some language, you've got some sort
of protocol in between.
And yeah, I spoke about this as kind of one of the opening motivators for my evergreen
talk, which is like, you deploy the version two of your backend, and then for some reason,
the version two of the front end is maybe slightly off, where there's a lot of traffic
and there's a lot of traffic in flight.
And I think our normal answer in this is why I feel like the concept of the semantic boundary
stuff is so nice, because now we can say, what we normally do is we go, you know what,
let's completely avoid the fact that there's semantic boundaries here.
Let's not talk about that complexity at all.
Instead, let's just throw the baby out with the bathwater and be like, you know what,
industry best practice is that everything's always backwards compatible.
And I'm like, that's like, yeah, that's a solution.
But it's pretty intense, right?
That really limits now how you're going to approach your thinking and your nervousness
and your caution about making really wide sweeping changes.
And don't get me wrong, if you're Google, this message is not for you, right?
I don't know what you guys are dealing with.
You're dealing with some crazy stuff and that's fine.
But that's specialist territory, right? If I'm dealing with a website with like 1,000, or even 10,000 users, it doesn't have to be like that. There are some other trade-offs.
So yeah, to answer your question, how do we do this live reload?
Well, to change your backend model types, we had type A, we had type B, we wanted to
move between the two.
Messages are the same, right?
You had your old in flight message type and you got your new in flight message type.
If your backend is just upgraded to version two and it receives a version one message
type, what do we do with it?
Exactly the same thing.
You write a function from A to B, only if it's changed, right? Lamdera only forces you to write migrations for the types that have changed. So most of the time you don't have to write any migrations for your core types. But yeah, let's say your backend messages changed. And usually messages are a lot easier to migrate, because usually it's really simple.
Like, for example, you've removed a message, right? You've removed a message variant from your old message type. So if one comes into your new backend, well, what does it mean?
Let's say we've removed the feature to alert of chat room joins, right?
So before every time someone joined the backend, we get a message of joined user or whatever.
Let's say we've removed that, right?
And so we deploy and as we deploy, let's say like hundreds of users were joining, right?
So the backend upgrades to version two and suddenly gets hit with like a hundred of these
like version one user joined messages.
Well, you write a migration function and in the migration function you decide how to handle
it, right?
And so one way to handle it is to be like, well, just drop it on the floor.
We'll do nothing, right?
We don't have this feature anymore.
You know, map this migration to the ignore value, right? And then Lamdera will be like, cool, okay, I'm not going to do anything with the message. I'm going to drop it on the floor.
Or you can say, well, actually, you know, we don't have user joined indicators, but
you know, maybe for analytics, like we still do something in the backend.
I don't know, right?
Like maybe, maybe we have some different feature where actually the user now subscribes to
join notifications, right?
So then in your migration, you'd say, okay, well, whenever I get this old message value,
I'm actually going to fire this new message value, which is like, you know, process for
user channel joined subscriptions or whatever, right?
And then, once that migration is done, Lamdera will be like, cool, I'm going to give this now to your backend update function and let it handle the result of the migration of this message.
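Here's a hedged sketch of what that message migration can look like. The MsgOldValueIgnored and MsgMigrated constructors follow Lamdera's Lamdera.Migrations module as described here; the variant names are invented for this chat example:

```elm
module Evergreen.Migrate.V2 exposing (toBackend)

import Evergreen.V1.Types as Old
import Evergreen.V2.Types as New
import Lamdera.Migrations exposing (..)


toBackend : Old.ToBackend -> MsgMigration New.ToBackend New.BackendMsg
toBackend old =
    case old of
        Old.UserJoined _ ->
            -- Feature removed in V2: drop the in-flight message
            -- on the floor.
            MsgOldValueIgnored

        Old.MessageSubmitted text ->
            -- Unchanged variant: carry the value across as-is.
            MsgMigrated ( New.MessageSubmitted text, Cmd.none )
```

Instead of MsgOldValueIgnored, the first branch could also map the old value onto some new message for the backend to process.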
And again, the really root goal here is just that like when you write this stuff and it
compiles, like it works, like it's guaranteed.
Like that's the contract with the compiler that it'll guide you along and hold your hand
for like all these kinds of things.
And then once you're done, it's like, cool: because you follow that API and you've been nice and constrained, now I know that I can go and do all this stuff for you, because it's very, very clear.
There's no side effects in there.
I know you're not calling JavaScript.
I know there's no like weird state to handle otherwise.
Like the design of Elm allows for us to try this kind of rather different approach to
doing holistic type safe migrations between systems.
And there is also handling the hosting of the application, right? You could have had Lamdera not handle that part. Was it Evergreen, like, do you need to do hosting for Evergreen to work? Is that the primary factor?
So if we were to talk about kind of what feature led to Lamdera becoming a full platform, more of a SaaS thing: yes, that's it. The feature that kind of forced that is, when you say, hey, Lamdera, deploy this new version, Lamdera needs to be like, okay, when I deploy versions, I do a check for you to make sure that everything that you're about to deploy makes sense against the types that are already deployed in production.
And so we go, okay, fine.
How do we get the types that are deployed in production?
It's like, well, we need probably, you know, some API there and that has rich information
in it.
And then that leads to being like, okay, well, there's the concept of apps, right? And apps have names, like which app are you talking about right now? Now suddenly we have app naming, right? And you don't want to have collisions with app naming, so you need a system that's tracking apps.
And then, we haven't talked about it yet, but the compiler now also does the secret config, or just the config, environmental config, right? Type-safe environmental config. And part of that is secrets, right? Which are things that you don't want to ship into your front end. So the compiler additionally needs to be like, okay, I'm going to check all the secrets you use in your code. But obviously when we deploy, we need to inject those values, right? So I also need an API to talk to that contains all those values. And you know, your secret key probably shouldn't be accessible to everybody.
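As I understand Lamdera's config feature, this surfaces as an Env module whose values are overridden at deploy time with what's configured in the app dashboard; a sketch, with an invented value name:

```elm
module Env exposing (..)

-- Local development value; in production Lamdera injects the
-- real secret configured for the app. Because it's marked as a
-- secret, it can only be referenced from backend code, so it
-- never ships to the browser.
sendGridApiKey : String
sendGridApiKey =
    "dev-dummy-key"
```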
So now we have the concept of accounts, right? I need to know which apps that I'm keeping track of belong to which account. Do you have authentication for that? So now we've got the lamdera login functionality, where the CLI authenticates. So slowly, little by little, it just became clear that there's no nice non-service way to do this stuff, right? In order to know these things, there's a bit of a coupling between the code, the process that you do locally, and your live production system: the config it has, what kind of state it's in, whether past deploys have failed. Those together necessitated moving in that direction.
And as I moved in that direction, it just became clear: well, actually, if we do bring hosting and deployment into the picture, some things actually get much nicer and easier, right? Rather than being like, oh, you know, does Lamdera do Docker, or, I don't know, what's popular, Kubernetes, I guess that's cool these days, but no, there's that blog post that says it isn't cool... I just kept going back to the root thing: what cuts out the nonessential complexity, what makes all of that nice? And that kind of naturally led me to Lamdera being a service.
Does that mean it can't be like a self-install thing in the future? No, I don't think so. It probably just means packaging up all that service infrastructure and having it available in some way. But right now, that's not where the value lies, right? The value lies in kind of nailing those complexity things. So that's what I'm pursuing, and I guess that's why things are the way they are.
So I think most of our listeners are going to be on board with the value proposition of, you know, type safety, and, what if you took that type safety to its logical conclusion? I mean, if you're not, then I'm impressed that you've gotten this far listening to us here nerd out about type safety and contracts and all this stuff.
It's probably people that Mario forced to listen to the podcast.
I'm sorry.
You know, so we all see this is an incredible concept, and Elm is just the perfect tool for this, and Lamdera has, I think, really nailed the details in terms of the Evergreen migrations and the send-to-backend messages and all this stuff. I think the big question for a lot of people is going to be: should I use this in my company? And is it ready? What's it ready for? So let's talk about that a little bit. And I know you've got some good notes on that at the Lamdera website, but let's dig into that a little bit. Like, how should someone evaluate that decision of, I'm starting a new project. Should I consider Lamdera?
Is it a viable option for my use case or not?
What are the considerations there?
So one consideration is I really admire the way that Evan thinks really deeply about experience.
And so one of the things that I thought about for a while and got feedback from a bunch
of people was like, what is the experience of evaluating new tech?
And it seems to be that the consensus is, like, if you go take a look at Nuxt.js and Next.js and a lot of the popular frameworks, every single one is the same.
It's like, get started in two minutes.
And it's like, what am I getting started?
What am I getting in for?
And they use a lot of like, I don't know, I feel like it's a little bit shady.
Maybe it's not, maybe I doubt people are trying to be malicious on purpose and it's just the
way our industry has evolved.
But there's always like, you know, 20,000 stars on GitHub and everyone's using this.
And like, I mean, I do the same, right? You go to the site, and the very first thing I put is user testimonials. But where you should start is to go to /shouldnt-use. I've done an anti-marketing page, to kind of just be like, here are all the pitfalls, right? These are all the ones that I'm aware of. If you know of any others I'm not aware of, let me know.
It's a weird balance, right? Because I want to promote Lamdera, but I don't want to waste your time. I definitely don't want anybody to have the experience that I used to have, where you spend like three weeks diving into... I don't know. The only one that springs to mind for me is Docker. I got into Docker way too early, and it had so many problems. There was no good information anywhere; you just had to burn time. And I'm like, that's not what I want, right? The whole point of Lamdera was to save you tons of time. So start with shouldn't-use. If any of the stuff there makes you nervous, jump on the Discord that we have, ask about it, talk about it.
Yeah, I kind of want to start from that point, to be like, here are the edges, right? This is when you shouldn't use it. Then on the flip side, why you should use Lamdera: I think the sweet spot right now is if you're a company that uses Elm, or you're somebody that uses Elm, and especially if you've had this feeling where you start on a project and you're excited for it, and then three months later you're debugging HTTP decoders and dealing with migration issues and whatever. And you have that experience of getting bogged down, and the project gets less and less fun to work on. If you give Lamdera a go for that instead, I think you'll find what I found, and what other people seem to be saying, is that that feeling isn't there, right?
Like that weird slow down, the bogging down, the complexity kind of blowing out, like it
just doesn't seem to manifest in the same way, right?
And that ends up resulting in like a really delightful feeling, right?
And from a really pragmatic point of view, I'd say like if you value your time, then
it's much clearer once you've experienced that to be like, holy crap, like there's just
so much time I didn't have to spend on stuff that didn't help me with my core goal.
And yeah, it's probably too long to go into all the reasons you shouldn't use Lamdera, but I tried to lay them out clearly there to give you an idea, because even though it's really easy for me to be like, oh, it'll save you a ton of time, if there's a roadblock you're going to run into later, it's kind of pointless.
But the one that I will point out, that I think people bring up a lot as a potentially big issue, and that I don't really think is as big an issue as we've conditioned ourselves to think it is, is this one of scale, right? People always go, oh, how's Lamdera going to scale? And I'm like, cool, what do you mean by scale, right? Probably one of the largest sites I ever worked on had peak traffic of like a thousand requests a second. That was with like 50,000 shoppers on the site at once, right? If you're talking about that kind of scale, I think Lamdera can handle that scale fine. I haven't posted public benchmarks yet. It's something I need to work on, but locally I can run at about 3,000 requests a second, right? So it's, I mean, it's nothing special. It's basically in the kind of Ruby/JavaScript range, just kind of standard unoptimized performance.
The real bigger question is going to be about the kind of complexity that you deal with. In terms of impact to somebody's business or somebody's time, the time you'd spend on scaling issues and all of that nonessential complexity will, I think, be more than offset by the time that you'll save in Lamdera by not having to worry about those things upfront. Does that answer your question?
That's definitely one of the key points that I think people will wonder about.
Like, you know, it's, it's challenging as a, as a business or, you know, employee at
a company to like make these calls and, you know, there's, I mean, we all, we all relate
to this in that Elm ecosystem of like, wait a minute, but like React has a bajillion packages
and, and React has a bajillion users and Elm doesn't have a bajillion users.
Is that okay?
Is Elm dead?
Cause it didn't have a release this year.
You know, I won't rehash that, but yes, the benefits are very clear. I really like what you're saying about scale. I think that, you know, the worry is, it's such a burden of proof as an employee championing a tool to say, I'm confident that we're not going to get stuck in a dead end here. And so I really like that you've got this reasons-you-shouldn't-use-Lamdera anti-marketing page.
I think that's really cool.
Another one you've got here is restricted JavaScript, which goes into the guarantees that Lamdera gives you. Let's dive into that one a little bit.
Yeah, sure.
So originally, also on the shouldn't-use page, I note that things change over time, right? So I've been kind of leaving little breadcrumbs of restrictions that used to be there, and then crossing them out when they change. So, to your point about businesses and confidence, we haven't talked about it yet, but I just did the Lamdera version 1.0.0 release. And the whole goal of that, and the feature sets that I was working on to release it, was basically to increase the kind of productionization of Lamdera and the confidence that businesses could have in using it.
You know, before, we were in alpha, and even though it really wasn't that dire, I just kind of felt like, I really don't want to waste people's time. If people want to ignore the warnings and come in for the bleeding edge, perfect. But I kind of peppered it all over everything: this is alpha, don't use it, it's really bad. Which is funny, because then some people come back afterwards and are like, is it still really bad? It was never really bad. I was just saying that.
Your anti-marketing campaign was too effective.
Yeah, but I think there's definitely people that have a perception like, oh, it's fully experimental, you shouldn't touch it yet. And so the version one release is supposed to signal that, you know, my rapid experimentation with the stuff in the core has kind of settled. There haven't been massive changes. I'm confident now that those six core types and that premise are solid; that part's not changing. And so yeah, the point of doing the version one release was to be like, cool, I'm signaling that I'm now focusing on the production stuff, right? Now you're not going to just arbitrarily lose your data. I wasn't willing to give that guarantee before, even though I tried to maintain it, right? I didn't want people to lose data arbitrarily. But you know, we had times where, like, a user found a bug in the wire protocol and we had to make a major change, and it was just easier to wipe data than to carry it over.
And so yeah, that now leads into the JavaScript stuff.
So originally that page said no JavaScript, none whatsoever, anywhere. And that is still true for the backend, and exactly for the reasons that you said, right? Not having it there means that it stays pure. Again, like I like to say, if you understand the core premise of Lamdera, about removing the complexity of the semantic boundaries, then it makes sense why that restriction is there, right? You work your way backwards: okay, I've got data in production and I need to migrate it. How do I write this migration function in JavaScript? You can't. Okay, how do I guarantee there are no side effects in the JavaScript? You can't. Every answer kind of falls down. So what's the answer? We don't have it there until we can figure out a way.
However, something that started to come up a lot... you mentioned Martin. So Martin's been an avid user and avid supporter and has written a ton of really, really cool apps, which I kind of showcase every time I release anything. And yeah, he's got his Elm Audio package. And so he approached me once, like, well, how could we do Elm Audio? And working through that and thinking about it more, what we realized was that it actually can make sense to have some JavaScript on the front end in a constrained way, where when you think about what a migration looks like, you can actually make sense of it, right?
Either it doesn't hold any state of its own, or if it does hold something, like say the audio context has been initialized, you could conceive of preserving that through an upgrade function, right? So there could be a JavaScript upgrade function where it doesn't really update any data, but it might just be like, oh, you need a new audio handler, I'm just going to use the previous reference because the page is already initialized, or whatever it might be.
So we started exploring this, and as I started thinking about it more, it birthed something that I've not promoted in the community yet, called elm-pkg-js. It was this idea of: what would it look like if we had some sort of standard, or some sort of tooling, so that certain Elm packages could say, hey, to use this... Because what we do today is, every author, in their own different style, will be like: hey, install this package, but it needs some JavaScript. So copy this code for the JavaScript, and copy this code for the ports, and then copy this into your subscriptions, and then remember to send this command. You know, there's this whole set of steps, and every project basically does the same thing. And then there's a bunch of people that have tried to fix that with kind of Elm package standards, but they all go in slightly different ways.
So I did like a literature review, if you can call it literature, a docs review, of all the packages that were out there, and kind of thought about this. And yeah, on my GitHub, elm-pkg-js is kind of the start of a draft where I've been thinking about this. And there's a draft implementation of it in Lamdera with some very specific things: we have constrained JavaScript that can be added in, and all the ports wiring and stuff gets auto-generated for you.
So another simple example is copy-to-clipboard, right? It doesn't really have state. It's just an API that needs to be available. It doesn't conflict with the concept of Evergreen migrations. There's unlikely to be an in-flight copy-paste that you need to migrate; even that concept doesn't really conflict.
And so yeah, that's kind of softened. So in the context of that shouldn't-use page and the version one release, I guess what I'm inviting people to understand about Lamdera now, slightly differently, is that it's not experimental anymore. Yes, there are sharp edges, but we should chat about them, because things can change, right? And some of those constraints that are there, not all of them are permanent, and they can be discussed. And so yeah, I think cool things come out of it, and I hope it's a push to move the elm-pkg-js kind of spec forward.
I think it would be really cool for it just to be a community thing, because then we could have an alternative where it's not about npm, right? It's not about being like, oh, I want to use d3.js, how do I bundle it quickly? It's more like, there are some things that Elm core doesn't cover in kernel code yet, right? This is not about doing kernel code. This is about making the ports ergonomics a little nicer and easier to use in the meantime.
And then also, it provides really nice case-in-point examples to the core team. We can say, look, here are the 10 JavaScript usages that just keep coming up over and over and over. Here's how they're implemented. People are happy with this.
I think that serves as really awesome research for someone to then consider, okay, well what
would be the maintenance burden or the cost of actually sucking this into core and offering
it natively?
I think it's just a really nice way to showcase an experiment in a standardized way rather
than being like, Oh, Hey Evan, here's seven packages that have 10 different ways to integrate
the JS.
Like you have to analyze the pros and cons like manually.
So yeah, that's the secondary effect of that exercise.
And then it's still not FFI, so you preserve the guarantees within the Elm sandbox. Because that's the huge thing. I mean, the port design in Elm versus other languages, like ReScript (formerly ReasonML), that have this FFI concept of binding directly: there you have these guarantees that get watered down. Whereas with Elm, that's the whole point. You can't water down the guarantees. You can't trick the compiler. You can't sneak in side effects. And Lamdera also relies on and is built on those guarantees. So you can't have backend calls to JS in Lamdera, to preserve those guarantees.
But, so in Martin Stewart's Meetdown, which is a super cool Lamdera app that now is hosting the Elm Online meetups, or meetdowns, it can send email.
And now I believe it's doing that through calling an email service through HTTP.
Is that correct?
So one thing that I hadn't noticed until recently, but a lot of confusion has come up since the V1 release: people go, well, if you've got all this type safe stuff between the frontend and backend, how does Lamdera communicate with the outside world?
And the answer is the same way that a browser communicates with the outside world, right?
On the frontend, when you want to do the HTTP stuff, you do a request, right?
So while the frontend-backend comms in Lamdera are type safe and strict, that's not to say that all comms are type safe and strict.
If you want to make a HTTP request to a third party service, you can still do that from
the backend.
And in fact, that's exactly what you have to do for something like email, because usually
these email services will have a secret token.
And one, you can't call it from the front end because of CORS protections.
And two, you can't call it from the front end because you don't want to expose your
token, right?
Someone can steal your token and send it everywhere.
So one, that's why the secret config feature exists.
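To make that concrete, here's a rough sketch of what a backend email call can look like. The `GotEmailResult` variant on `BackendMsg`, the secret name `sendGridApiKey` in Lamdera's `Env` module, and the endpoint URL are all hypothetical; the point is that it's just ordinary `elm/http` code running in the backend:

```elm
import Env -- Lamdera config module; sendGridApiKey is a hypothetical secret set in the dashboard
import Http
import Json.Encode as Encode

-- Sketch: call a third-party email API from the backend.
-- The token never reaches the frontend, so it can't be stolen.
sendEmail : String -> String -> Cmd BackendMsg
sendEmail recipient text =
    Http.request
        { method = "POST"
        , headers = [ Http.header "Authorization" ("Bearer " ++ Env.sendGridApiKey) ]
        , url = "" -- illustrative endpoint
        , body =
            Http.jsonBody
                (Encode.object
                    [ ( "to", Encode.string recipient )
                    , ( "text", Encode.string text )
                    ]
                )
        , expect = Http.expectWhatever GotEmailResult
        , timeout = Nothing
        , tracker = Nothing
        }
```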
But two, if you're going to ask how Lamdera apps scale, that would be one way to answer it right now.
Say you had a data set that got too big for your backend: you could incrementally, or in one hit, post that out to some other service and then use HTTP to query that service as part of the normal backend functionality.
So in a way that's an escape hatch, and it preserves those guarantees, because yes, we're performing an HTTP request, but it doesn't pollute the purity and side-effect-free quality within the Lamdera backend, right?
Which then also means it ties into the Evergreen guarantee.
So say you're making an HTTP request in the backend and it's a long running request.
And so, I mean, consider doing this on whatever tech stack you use today, right?
HTTP request is in flight and then you do a new deployment.
Like what happens in your tech stack today?
I would hazard a guess.
I mean, maybe it's just me running really unprofessional setups or something, but I
would hazard a guess that most people would slightly panic, right?
They'd be like, I don't know. Or if you're lucky, maybe you've got a really nice deployment system that does rolling upgrades, where you signal all of your servers that they should be stopping, and they wait until the HTTP request is done and process it, and new servers come up at the same time.
Maybe you've got something magic that deals with that, but I reckon most people don't.
I reckon most people are like, I don't know, maybe I drop it on the floor, maybe it times out.
You build some complex system that drains a queue and keeps it idle for a while and
then monitor it and then you migrate over and Kubernetes is involved, of course.
You go, Mario, you irresponsible developer.
What are you doing having inline HTTP?
It's absurd.
Clearly you needed Kafka and a queuing system with retries and all these things.
You've completely failed to scale and correctly build your personal blog because you're silly.
I'm like, you're right, I should have put a Kafka queue in my blog.
That's what I was missing.
At the risk of sounding condescending.
With Lamdera, the nice thing is you stick with those primitives and they all hold.
You deploy a new version, that new version takes over.
This long running in-flight request lands afterwards.
It was a request you sent in version one; you upgrade in the meantime; the request from version one comes back to find that it's a brand new app.
Oh, that's okay.
No cause for panic.
Lamdera told you before you deployed that you changed something to do with this handling and you would need a migration there.
When that message comes back, it's probably a backend message, you've got a handler for
it and you're like, okay, this is what I do with this request.
It's come back as a version one type.
What did I want to do to transform it?
If you're lucky, if nothing changed, you don't have to do anything and it just gets handled.
If something did change, you change that feature, you change the types or you change what you
expected back.
You can very clearly and explicitly say in code, this is exactly what will happen when
I get this type back.
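As a hedged sketch of what that handler can look like (the module layout and `Lamdera.Migrations` constructor names here are from memory and may not match the current API exactly), an Evergreen migration makes the decision explicit in code:

```elm
module Evergreen.Migrate.V2 exposing (backendMsg)

import Evergreen.V1.Types as Old
import Evergreen.V2.Types as New
import Lamdera.Migrations exposing (MsgMigration(..))

-- Sketch: a BackendMsg sent by version 1 (say, an in-flight HTTP
-- response) arrives after version 2 is deployed. You state explicitly
-- how the old value maps onto the new types.
backendMsg : Old.BackendMsg -> MsgMigration New.BackendMsg New.BackendMsg
backendMsg old =
    case old of
        Old.GotEmailResult result ->
            -- Nothing changed for this variant, so carry it across as-is.
            MsgMigrated ( New.GotEmailResult result, Cmd.none )
```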
That's nothing new.
That is literally the experience with Elm.
It's just Lamdera.
For me, if I'm running a business and trying to make a business decision and evaluating Lamdera, that HTTP functionality gives me a lot of peace of mind.
We should mention that this is pro functionality, right?
No, no, no, no.
The HTTP requests?
Everything is baseline.
There's nothing in the paid plans that have been released that I think restricts your usage of the core features or the language.
I really get upset when I get some freemium tooling where the development experience locally
then gets gated by something.
The worst thing would be, say, you download some premium IDE package and you go rename
all symbols and it's like, sorry, you have 100 symbols, but until you upgrade, I can
only rename seven of them.
You're like, why?
That's not an upsell model.
That's just like a make me angry model.
Anger fueled upgrades.
Rage upgrades.
You will buy the license because of anger.
Yeah, exactly.
When I was young and just starting my career, that wasn't an option.
Then you're like, I guess I'll go pirate it, maybe.
Now I'm privileged enough to just be able to pay to make that anger go away.
I don't like that brand anymore afterwards.
The goal with Lamdera is I would like for people to have a really, really nice experience and then to be like, oh man, I love this, I really want to upgrade.
I'm trying to structure it such that what you're upgrading for are things that are a joy to upgrade for.
The only restriction at the moment is the free app has eight hours of sleep.
The idea is probably if you're just starting off with a free app and you're trying it out,
it doesn't need to be running while you're sleeping.
You can configure what time zone that is, but during those hours it doesn't run.
Then the push up to hobby is to be like, well, look, I'd like this to run 24/7.
Then that unlocks that.
Then the pro plan is more structured at a professional context.
If you're putting a custom domain name on your app, you're probably using it in more
than a hobby context, not always.
If that upsets you, feel free to contact me.
The idea is to structure it.
Actually, full disclosure: this is very much influenced by Dillon's thinking on how to structure product stuff.
Dillon gave me a lot of advice about that.
The idea is just to make that a pleasant upgrade, and that it's very clear that you're upgrading because of these benefits that will make it delightful, not because of artificial pain that Lamdera has imposed on you to make you feel bad.
Right, I really like that experience as well of being able to try something out and really
try it out.
Then when you need a professional thing with a custom domain, it's like, all right, you're
probably making money on this, so it's time to pay a little bit of that back.
I like that a lot.
What you're saying is you prefer not having that rename symbol feature instead of having
it limited to seven.
I prefer not having the microaggression cycle of an uptick of hope and then a plateau of
confusion and then a downward spiral of anger.
If we can remove that little interaction.
And the cycle.
But actually, you thought about the hope of paying.
When you pay, then you get that hope back.
Oh yeah, I see.
But it's bigger.
But a little bit of your soul dies.
Yeah, with a sidecar of anger.
I don't know.
But yeah, it's all very, I should say for those that have made it this far, everything
so far is very early days.
I'm really stoked to get feedback.
I've had a few people already reach out, including a couple of companies, be like, can you give
us more information about XYZ?
We're really excited, but we don't understand these things.
That really speaks to me.
That feedback is super helpful right now.
And then yeah, I should add, while we're on the topic of pricing: I've put the enterprise kind of à la carte stuff at the bottom, which is like, there's a whole bunch of extra stuff that's possible.
But yeah, they're conversations that we need to have at this stage, including source available
licenses, dedicated instances, different deployment models, even on premise installs.
Some companies have really strict restrictions on what they can do.
So yeah, again, the version one kind of signals that these conversations and stuff are now
ready to be had, and I'm happy to have them.
And to me, putting myself in the shoes of someone trying to put forward Lamdera as an option to evaluate for a new project, the things you've laid out and the pricing are really great there.
And then I want to know that, like I said earlier, there's not a dead end: there's this ability to reach the outside world from the backend using HTTP, like sending emails through SendGrid or whatever email service.
That's huge to me, because that is an escape hatch.
It's an escape hatch that doesn't destroy the guarantees of Lamdera, but it gives you the ability to reach out and do what you need to.
In fact, you were saying that Lamdera doesn't let you call JavaScript from the backend, but in a sense, you could argue that if you have HTTP, you can: you just spin up a serverless function, write some JavaScript, call that serverless function from Lamdera's HTTP, and boom, you're calling JavaScript, but in a safe way that's not going to pollute those guarantees.
So that would give me a lot of peace of mind for making those business decisions, I think.
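As a sketch of that serverless escape hatch (the URL, the payload shape, and the `GotJsResult` variant on `BackendMsg` are all hypothetical), the backend treats the JavaScript as just another HTTP service:

```elm
import Http
import Json.Decode as Decode
import Json.Encode as Encode

-- Sketch: "call JavaScript" from the Lamdera backend by wrapping it in
-- a serverless function behind HTTP. The JS stays outside the sandbox,
-- so Elm's purity guarantees are untouched.
callJsFunction : String -> Cmd BackendMsg
callJsFunction input =
    Http.post
        { url = "" -- hypothetical endpoint
        , body = Http.jsonBody (Encode.object [ ( "input", Encode.string input ) ])
        , expect = Http.expectJson GotJsResult (Decode.field "output" Decode.string)
        }
```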
Although there are a lot more ways that it can fail because of HTTP.
It's still JavaScript and now you have HTTP.
Oh, sure.
Oh, of course.
Well, I mean, if you can avoid it, that's great.
But if there were no way to get around the things you can't do just within your Lamdera sandbox, then I would start to get nervous betting the business on that, or betting the project on that.
And if you can reach out, you at least know you can do that if you need to.
And in fact, if you really reached a dead end and you're like, you know what, Lamdera, it turns out we do need web scale, we do need 10,000 requests per second, we've been wildly successful and Lamdera is no longer the right fit.
Well, maybe you start incrementally migrating things to code that you call through HTTP, and eventually the whole service is migrated.
But anyway, that one escape hatch there, to me, seems like the peace of mind that you would need to be able to try it out for a project and evaluate it as a viable option.
So I think that's great.
Very exciting.
I mean, even if you have too many requests, too many users at the same time, I'm guessing
that that's what the Enterprise Edition is for, like just talking to Mario and asking
for bigger servers or something like that.
So, yeah, it's a slightly fun one.
The interesting thing with something like this, and I suppose you guys as package and library authors resonate with this, is that you spend so much time thinking and worrying about what could go wrong.
And then three years later, someone's like, did you think about X?
And it's like, usually, I feel like 99% of the time, the answer is yes.
And it's totally fine.
It makes sense.
I actually, I've tried to reframe the way that I feel about that feedback because sometimes
it can feel really negative and really intense.
But I've tried to reframe it to be like, well, actually what this user is articulating is
that there's something they didn't find clear.
It's an opportunity for me to be like, oh, okay, how did you end up worrying about this?
The HTTP one has come up a lot recently: how did people get to this presumption that there's no HTTP?
Because that was always a feature.
It's not an escape hatch.
It was always there from the beginning.
Elm functions in the backend identically to how it does in the frontend.
We don't say, oh, HTTP is an escape hatch for Elm's frontend.
It's just a feature.
So yeah, one of the ones that I've thought about a lot is scale.
And one of the things that kind of convinced me that maybe this wasn't such a crazy idea is that Martin Fowler wrote about most of the ideas in Lamdera, but under a completely different name.
I found it by accident, ages after someone else mentioned it.
And he wrote about this concept of a memory image, I think is what he calls it.
He was talking about it in the context of event sourcing.
So event sourcing, just really briefly, is basically this concept that rather than handling
a request and then mutating your state and then forgetting about stuff, that you have
this alternate architecture, which is you treat everything in your system as events.
You write all of these events to a stream, and then your production state is basically
a function of those values.
So if you want to recreate your state at some point, you can replay all of your events.
If you want to debug, or you're trying some logic that needs to change, you can basically tweak that, replay all the events, and then get to where you are.
And actually, we used event sourcing at a past company.
So I already was really familiar with these ideas.
And I think they influenced the architecture that Lamdera has today, which is literally that.
Or rather, the Elm architecture is effectively event sourcing, if you keep hold of the original messages.
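If you keep the original messages around, that correspondence can be sketched in a few lines of Elm: replaying the event stream through `update` is just a fold (commands are ignored here for simplicity):

```elm
-- Sketch: the Elm architecture as event sourcing. Given the initial
-- model and every message ever processed, the current state is a left
-- fold of update over that event stream.
replay : model -> (msg -> model -> ( model, Cmd msg )) -> List msg -> model
replay initialModel update messages =
    List.foldl
        (\msg model -> Tuple.first (update msg model))
        initialModel
        messages
```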
And so yeah, immediately the problem that I thought of was, oh, this thing's going to run out of memory at some point.
I'm like, well, how much? Actually, I wondered, what would the limit be?
So what's the addressable memory space for JavaScript or Node.js in the backend?
And they've changed that.
It used to be really capped, like the 2 gig or 4 gig or something.
And now it's the addressable space of the 32 bit RAM index.
It's effectively unlimited.
And so I was like, OK, well, what's the maximum memory I could practically provision on a VM?
And I think Google's maximum is like 4,096 gigabytes, like 4 terabytes of RAM.
Yeah, and Amazon's like 3,900 gigabytes of RAM.
And I'm like, OK.
So I don't know.
I mean, that would be an absurd monthly cost.
But if we're just talking about how far a Lamdera app can vertically scale, I haven't yet figured out what that limit is.
The quicker limit, just to nerd out on it for a bit, the one you seem to hit first, is that JavaScript data structures just don't scale super well natively.
If we think about what an Elm dictionary is in its essence, it's really quite simple.
But the JavaScript object that backs it, when it grows really, really large, is quite inefficient compared to, for example, a dictionary in Haskell or Rust, or even C or C++ or something, right?
Those would be much more compact, whereas in JavaScript, because it's a dynamic language under the hood, it has all this extra stuff that we never use, but it's kind of there and kind of grows: all the object prototypes, a whole bunch of other stuff, and weird stuff to do with string handling.
So there are definitely things to work on and optimize, and definitely things to explore.
It's not a dead end.
So I'm excited to explore that stuff one day.
And yes, you're right. Right now, the only people that have hit limits have basically been Martin doing some crazy ambitious stuff.
And those have been artificial limits.
Right now, I've just got some artificial limits in prod, just to protect the resources from going down unnecessarily.
And yeah, the enterprise kind of offering is like, let's just get you a giant VM and see how far it goes.
That would be the main avenue to try for now.
Really exciting stuff.
So I think we could easily just roll right into a part two episode right now.
But one thing I wanted to touch on briefly: to me, Lamdera demonstrates why Elm is really exciting, because of the types of things it enables when you have pure functions and managed effects, and Lamdera takes that premise and does a lot with it.
You take this premise of Lamdera: what if there was no glue code between the frontend and the backend, and everything was type safe, including migrations?
And then what emerges from that?
And, you know, Martin Stewart gave this talk, which we'll link to, at the most recent Elm Online meetdown, and presented a prototype of an end-to-end testing tool he built: end-to-end testing with real-time, multi-client connections to a Lamdera backend, where executing the tests actually executes the Lamdera backend, and even replays it in a UI so you can time travel and see the state as multiple clients interact with it, end to end.
It's one of these things where Martin described it as: I started pulling on these threads and it actually came together better and better, rather than starting to fall apart as I realized all the places it didn't work.
I just feel like that's a really exciting thing about Lamdera.
I'm just curious: are there other things like that, mad science ideas or things you're pondering? What boundaries might Lamdera push down the line?
Yeah, absolutely.
That's a really great question.
So I feel like I can't really take credit for this because it, I think what it comes
back to is kind of what we started with, right?
Like if we have this new terminology now and we say like, why does this happen?
I think the reason that Martin was able to be successful with that is because, if we talk about semantic boundaries, there are none, right?
Because it's so constrained.
Or maybe to flip it: why would someone going on a 20-hour yak shave to build a full end-to-end testing framework in a language like Ruby or JavaScript seem insane?
Like if I was a boss, and a team member was like, hey, you know, I know I had to do this for the deadline, but I went off and started on this thing, I feel like my first reaction would kind of be like, that's really futile.
Like there's already a testing framework out there that people have spent millions of hours on, like RSpec, you know?
And it seems to me now the reason for that is because of semantic boundaries, right?
If you do a test framework for Ruby, you're dragging everything in, right?
Like how am I going to test my JSON boundaries?
How am I going to test HTTP?
How am I going to test GraphQL?
How am I going to test like Postgres?
How am I going to test ActiveRecord?
Oh, there's an ORM.
They've all got these subtle things in there, and it's hard for us to talk about that. I think in our language we say, oh, that's complex, and that's as far as we go, right?
We don't have a lot of detailed words to talk about that kind of thing. Well, we do, but we don't use them normally, I feel.
Maybe not outside of a scientific context.
So yeah, like because it's so constrained, there's no semantic boundaries.
I think there's tons of really ambitious stuff.
Like the stuff that Martin did, once you see it, it's almost obvious, right?
You're like, oh yeah, because it's all constrained and they're all values.
It makes sense.
Now that he's explained it, you're like, oh, I could even conceive of how I might try to do that myself.
And again, I don't think that's new.
I think I can't really take credit for that.
That's kind of like a property of Elm, right?
And I think a lot of us feel that way.
Like, you know, we've got elm-pages and elm-review on this call, which I would say mostly stay in Elm world, except maybe Dillon, since you deal with a lot of messy boundaries in the data source stuff. But in principle, as long as it remains in Elm world, it's easier to be ambitious.
So yeah, I think there's a lot of cool tooling possible. I'm nervous to talk too much about work in progress, about stuff that might happen.
But yeah, for sure stuff to do with types, stuff to do with editing data in the backend
in production in a type safe way, then yeah, even more ambitiously, perhaps stuff to do
with being able to use the type information that we have about an app to automatically
scale apps horizontally, which is a whole topic in and of itself.
But that is something I would really love to explore in future.
I think we've talked about this privately at some point, but I don't know if it made it in there: making very efficient requests between frontend and backend, and backend and frontend.
Do you do compression on the messages that are being sent, based on what you know is in there?
Yeah, that's also a good question.
So, the wire format. This question comes up a lot because people are curious.
At the end of the day, even though the concept of sending data between the backend and frontend is expressed as a language primitive, like you don't see any encoding, you don't see anything there, Lamdera still has to actually take that data, encode it somehow, send it over some sort of protocol, and then decode it on the other end.
That in Lamdera is called wire, which is kind of just a legacy name from the early days of the project, when I worked on some of the really early stuff along with Philip Huggland.
And yeah, basically what wire does: it's a custom binary format that is intended to be super compact.
So for example, if you have a custom type where all of the variants are non-parameterized variants, say it was like IceCream equals Chocolate, Vanilla, Strawberry, then sending one of those in JSON, you'd at the very least have to put the JSON brackets and a string label.
Oh, you could, I mean, you could choose to manually encode it as an integer, I guess.
But in wire, that'll just be a single byte, right, that it gets compiled down to.
So where it gets nicer is like if you have really big data structures, lots of custom
stuff, pretty much most of the custom variants will get down to one byte.
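To make the size difference concrete (the byte counts here are illustrative comparisons, not the actual wire spec):

```elm
type IceCream
    = Chocolate
    | Vanilla
    | Strawberry

-- A JSON encoding needs at least a string label:
--   "Vanilla"          -> 9 bytes
--   {"tag":"Vanilla"}  -> 17 bytes with a tag wrapper
--
-- A binary format can instead number the variants, so any of these
-- values fits in a single byte:
--   Chocolate  -> 0x00
--   Vanilla    -> 0x01
--   Strawberry -> 0x02
```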
One of the releases actually was all about improving the wire stuff.
So if you check out the releases page on the Lamdera dashboard, one of them was to do with wire.
Yeah, it was alpha 12 actually, the release before version one.
And I have an example of like how big the wire values used to be and then what they
became as I kind of optimized it and tweaked it.
And so yeah, that's like fully like a language implementation thing that sits behind everything.
And it doesn't impact the semantics of anybody's apps, right?
So that nobody needed to change anything for me to just silently make those improvements
in the backend.
And right now they're focused on being really small, as in minimizing bandwidth, but not necessarily minimizing latency or processing.
So what I can imagine perhaps being things to explore in the future, I'm definitely not
like an expert at optimizing on performance, but I know that there are some people in the
community that really are.
So what I would really love to do eventually is just publish that wire format and publish
publicly the code gen that happens for it.
Because not only will it open it up to people saying, oh, actually, I'd like a CPU-optimized one for our use case, or a different one optimized for our use case.
And I think those potentially could just be settings for your apps, right?
Like, they don't change anything semantically.
You could just change it in the dashboard and it just changes under the hood.
But yeah, the other part of it would be opening up bigger interop.
So, you know, there's this elm-webapp, I think it's called, built by choonkeat, an Elm enthusiast based in Singapore that I met once when I was there.
elm-webapp is basically a little framework for serverless apps in Elm: being able to make a little Elm endpoint, he's got a little API for how you do request-response, and then packaging that and deploying it on something like an AWS serverless function.
So, you know, right now today you could use that with Lamdera over the HTTP interface.
What I would really love is to have that standard published.
So if you have lots of different Elm things in different places, you could potentially drop in wire to get that same Lamdera feeling: well, I've got the Elm type there and I've got the Elm type here, and I know at runtime I'm going to use wire to communicate between the two, which means at compile time I can use the Elm compiler to type check between the two.
So yeah, that's another thing that I think will shake out eventually that I'm pretty excited for.
Really good stuff.
Well, I think we could easily fill episode two, and actually I would love to do a part two. But for now, where should people go to learn more about getting started with Lamdera?
Yeah, definitely.
So the Lamdera site is probably the entry point. That's Lamdera without a B, so L-A-M-D-E-R-A. It's a common typo, Lambdera, which I see all the time.
I can hear some regrets in there.
Not necessarily a regret. I just, yeah, probably didn't think that through.
Probably most people intuitively know how to spell lambda with a B. So anyway, aesthetically, I think it's more pleasing without the B. Yeah.
So yeah, there's an entry point there to go and download the Lamdera compiler binary, which is just a single binary.
And that's it.
You run it, you get your full stack local environment: no databases, no dependencies, nothing else to install.
You don't even need Node.js to run it.
And then yeah, from there, there's the entry point into the dashboard.
I've tried to do kind of a lot of documentation.
There's a guide on converting existing apps, a guide on starting them from scratch, and there are examples there.
Very recently, with the release, I also posted a Lamdera RealWorld implementation, or more accurately I should say a Lamdera-style RealWorld implementation.
Because part of the RealWorld spec kind of mandates that you're using HTTP and JSON so that the frontend and the backend can be split.
And I'm like, yeah, that's cool, but that trade-off gives you a certain glue code cost, right?
So what I really wanted to do with Lamdera RealWorld was concretely showcase that.
On the GitHub repo, you can actually see the two pull requests where, in stages, I convert it. Actually Ryan, the author of elm-spa, did an elm-spa version of RealWorld, and I have a step-by-step conversion of that into elm-spa with Lamdera RealWorld.
So yeah, that might interest some people, to see concretely what it takes to convert and what that glue code actually looks like.
We've been talking very hypothetically about it, but if you want to see it, you can see exactly what gets removed.
And then, yeah, I would totally shout out the Discord.
We have a ton of people in the Lamdera Discord and they are all super friendly.
Most of them now beat me to answering support requests, which is awesome.
So yeah, really super friendly and super enthusiastic people.
So yeah, if you want to chat about anything or ask questions or anything's unclear, yeah,
that feedback is super welcome and I'd love to have you there.
Um, so yeah, that's probably it.
Great stuff.
Well, thanks again, Mario for joining us.
It was a pleasure to have you on.
No, thank you so much for having me.
And Jeroen, until next time.
Until next time.