What's your perspective on and experience with serverless backends?

Serverless has been a prevalent topic in our industry over the past few years, and while there are a lot of sceptics, I think it’s safe to say that serverless is here to stay.

What I would like to know:

  • do you have any experience with serverless?
  • if yes: what did you use it for? Which tech did you use? How did it work out for you?
  • if no: why not? Do you still plan on trying it, or are you staying away on principle?
  • is there anything you’re excited for in this space?

EDIT: Just to clarify, when I write “serverless” I’m talking about things such as cloud functions (e.g. AWS Lambda) and the architecture and paradigms that come along with those. An interesting book on the topic is Serverless Architectures on AWS (Manning).
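To make concrete what I mean by a cloud function: here’s a minimal sketch of an AWS Lambda-style handler in Python. The event shape and field names are illustrative, not from any specific service contract:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler: respond to a JSON payload.

    `event` is the payload the platform passes in; `context` carries
    runtime metadata (unused in this sketch).
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The platform takes care of provisioning, scaling, and tearing down instances of this function; your code only ever sees one invocation at a time.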

3 Likes

A post was split to a new topic: Types of web/app hosting

I feel @AstonJ is spot on in his analysis.

To answer @wolf4earth, I don’t have personal experience and I am cautiously optimistic. It can be seen as a fad, especially when you see youngsters immediately jumping to use it… for Node.js CLI apps. :man_facepalming:

Cold starts can be a serious problem, so languages that compile to native binaries – and don’t carry a huge runtime that needs bootstrapping on every app start – have an advantage. Off the top of my head, people have had positive experiences with “cloud functions” in C, C++, D, Rust, Zig, Go, Nim, OCaml, Haskell, and likely others as well. Dynamic languages haven’t fared as well, Erlang/Elixir included – JS and Ruby weren’t in great shape either, and Python and PHP are also rather slow to start.
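You can get a rough feel for the runtime-bootstrap part of this locally by timing how long an interpreter or binary takes just to start and exit. This is a crude stand-in only – real cold starts also include container provisioning on the provider’s side – and numbers will vary wildly by machine:

```python
import subprocess
import sys
import time

def startup_time(cmd):
    """Time how long a command takes to start and immediately exit.

    This approximates only the runtime-bootstrap portion of a cold
    start, not the provider's container/VM provisioning overhead.
    """
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

# Time the Python interpreter doing nothing: pure bootstrap cost.
elapsed = startup_time([sys.executable, "-c", "pass"])
print(f"interpreter start/exit: {elapsed * 1000:.1f} ms")
```

Swapping in a compiled hello-world binary for the interpreter command makes the native-vs-runtime gap visible on your own machine.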

I’m open to the idea but I’d think that you have to reshape your app and services – likely even your business – to make serverless [cloud functions] work well for you.

One area where I am hugely skeptical: the big vendors will do their absolute best to lock you into their garden. So the idea is good, but I feel it is being weaponized for business interests.

That, plus many people really love their JS and don’t want to learn languages like C/C++, Rust or OCaml, which have some of the best startup times: under 4ms for a hello-world program. So in the end I posit that the idea might die simply because the JS runtime (Node.js) can’t be made to start that fast.

3 Likes

I’ve updated the topic to clarify that I meant to discuss serverless in the scope of cloud functions and the architectures born out of them.


Cold start is indeed an interesting topic, @dimitarvp, but I think the discussion requires a bit more nuance than “faster cold start is better” and “native binaries start faster”.

You might, for example, be surprised to hear that Node.js seems to have better cold start characteristics than Go – and that across all major cloud providers (AWS, GCP, Azure). In general, Node.js ranks among the fastest cold starters of the languages commonly used in cloud functions.

And as mentioned before, a faster cold start isn’t necessarily better. If your cloud function consumes messages from a queue to send out emails, write to a DB or an S3 bucket, or call a remote API, then cold start characteristics become pretty much irrelevant. Of course, when you actually want to serve your API from cloud functions, then yes, cold start becomes important.
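For illustration, a queue-consuming function of the kind I mean might look like this in Python. The event follows the SQS-style `Records`/`body` shape; `send_email` is a hypothetical placeholder for whatever side effect you actually perform:

```python
import json

def send_email(recipient, subject):
    """Placeholder for the real side effect (SES, SMTP, ...)."""
    print(f"sending '{subject}' to {recipient}")

def handler(event, context):
    """Consume messages from an SQS-style event batch.

    Nobody is waiting synchronously on this response, so a few
    hundred milliseconds of cold start are invisible here.
    """
    processed = 0
    for record in event.get("Records", []):
        msg = json.loads(record["body"])
        send_email(msg["to"], msg["subject"])
        processed += 1
    return {"processed": processed}
```

Since the queue absorbs latency, this is exactly the shape of workload where the cold start debate stops mattering.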

I’m not trying to say “use JS for all the things!” – I’m not that fond of JS myself. Instead I think that serverless – and cloud functions – is an interesting new pattern where seemingly important topics (startup times etc.) can suddenly become a lot less important, depending on how you design your application.

If you ask me, the serverless paradigm has a bright future wherever you want to “glue” out-of-band things together or have to handle ludicrous traffic spikes. A great example is Square Enix, which used AWS Lambda to offload image processing for screenshots taken in Dragon Quest X – a workload that sees 20–30 times the usual traffic a few times per year.

TL;DR: I think that cloud functions – and by extension serverless – can allow us to build applications, or parts of a larger application, with characteristics usually reserved for much more complex deployments.

Will it replace “old” deployment strategies? Hell no! But will it become an interesting new tool in the deployment toolbox? I think yes.

Resources

4 Likes

Not very surprising, having in mind this part of the article:

With that, every language and runtime can be quite fast with a warm instance.

But the trouble is, every hosting provider’s goal is to reuse machines as much as possible and to charge as much as possible for the least amount of server power. So the cloud providers periodically stop warmed-up instances, and if you’re on JS or even Elixir, the startup times are still pretty bad.

So having that in mind, I still think the languages compiling to a native binary have a big advantage.

I admit I am skeptical. The problem might be contained in this qualifier: “among the commonly used”. It’s a fledgling area and I am not sure how many people have tried how many languages and runtimes. JS’s V8 engine is quite good, I know, but it’s hard to beat something like Go or Rust. So we might have a selection bias here?

Absolutely agreed. Breaking programs into smaller, more specialized parts has always improved maintainability – long before the “cloud” was even a thing. So I am fully for it! I am just kind of wary of the big providers; they don’t have our best interests in mind.

I’d likely be very happy to have something built by the OSS community that can be transposed onto AWS / GCP / any VPS provider, but I’m not sure how viable that is.

1 Like

I’m sorry to be this blunt, but have you actually read the article?

It mentions this, yes, but only to differentiate it from a warm start where reuse happens. The measurements are explicitly about cold start.

Here is the full context:

When Does Cold Start Happen?

The very first cold start happens when the first request comes in after deployment.

After that request is processed, the instance stays alive to be reused for subsequent requests.

And then further down (before the actual measurements):

How Slow Are Cold Starts?

The following chart shows the comparison of typical cold start durations for common languages across three cloud providers


Again: have you read the article?

Go is actually one of the languages measured – in both articles, in fact – and in both its measured cold start times are slightly worse than JS’s.

I’m not trying to shill for JS here. I was quite surprised when I first learned about the low cold start times of JS. But the facts don’t seem to support the general statement that “languages that compile to native binaries have a faster cold start”, at least not in the case of JS.

I think it actually shows how much crazy optimization work must have been done on the JS V8 engine and on the cloud provider’s side to enable these times.

Doesn’t mean that I would opt for using JS for cloud functions but it’s impressive nonetheless.

2 Likes

I’ve split my post into a separate thread as it was more about standard serverless rather than serverless backends (also added that to the title and tags for this thread :D)

1 Like

Nothing blunt about your question at all – I did read all the articles you linked, but was less impressed than you, it seems. I agreed that Node.js has been optimized well; that much is visible.

I was surprised by Go’s results because, even though it carries a runtime and a GC, it at least compiles to a native binary. On the other hand, Go’s compiler prefers compilation speed over optimization, so its binaries can be slower than you’d expect – maybe it was to be expected after all. That was my one useful takeaway from the article(s).


But I did find their language selection quite arbitrary. Appealing to popularity statistics doesn’t make sense in this scenario: at least part of the people who want to use cloud functions are keenly aware of the tradeoffs and would likely skip the dynamic languages (or those carrying a runtime with them) as much as they could. I’ve met people who started investing in learning Rust, OCaml and Haskell after being unhappy with the performance of cloud functions written in Python.

We are talking cold start times from 2ms to 50ms in the realm of those languages, depending on the hosting provider (same experience as the article: Azure was the slowest, GCP in the middle, and AWS the fastest, although the differences were less pronounced). So even a big-ish Rust program on Azure started at least 4x quicker than Node.js on AWS.

That very much depends on what people use cloud functions for. I and a few others who chatted about it preferred small programs that (almost) don’t rely on state, but I can see how others would try to replace a persistent service worker with auto-scaling, on-demand cloud functions.

That is a perfect example for good usage of cloud functions. I feel like they were invented just for those bursty, auto-scaling usages, and they excel there.

2 Likes

I’d be interested in knowing what they were using originally - ImageMagick?

There are more efficient options around like:

They also didn’t try going with a 64-core server and went straight to AWS – it would have been interesting to see a comparison between a self-hosted ‘optimised’ solution and the AWS one they ended up using…

2 Likes