According to Datadog's 2022 State of Serverless report, “Over half of organizations in each cloud have adopted serverless.” To some, “serverless” is a buzzword that triggers the response, “but there are servers.” Others view it as a fundamental building block of cloud technology.
Either way, serverless computing is not new. If your organization has embraced cloud adoption, chances are you are using serverless.
What do I see as some of the most enticing reasons for adopting serverless?
First, you pay only for the compute you actually use. This means we can proof-of-concept ideas without a large upfront cost for servers and other infrastructure running 24/7.
There's no need to provision extra capacity for high-traffic periods, as serverless functions scale nearly limitlessly. Serverless applications can run across different cloud regions and providers, allowing for great flexibility, and you can reduce latency by running closer to your customers. You can even run containers without operating a container orchestration platform like Kubernetes.
Is serverless a buzzword? Yes. Are there actually servers running these applications? Also yes, but the infrastructure is managed by the cloud provider: your organization has no servers to manage and no operating systems to patch and maintain. You can focus on your application code and on delivering business value to customers, rather than spending precious time and resources managing infrastructure.
That said, as organizations continue to adopt and expand serverless, new challenges arise.
Debugging, logging and monitoring are some of the biggest hurdles when operating serverless at scale.
Each cloud provider has its own tools for these purposes. For instance, AWS Lambda functions ship logs and metrics to Amazon CloudWatch, which gives insights during a debugging session. But what if you want the flexibility to run in three different cloud providers? It takes a lot of up-front time to learn each vendor's monitoring and logging solutions, and when issues span clouds, context switching between providers while debugging takes even more.

Serverless also fits very well with a microservice model, since the best practice is to write small, single-purpose functions. This naturally leads to a growing number of serverless functions serving many purposes. You can write serverless functions in almost any language, giving teams greater control and choice of what language they use to solve a problem. Again, this advantage leads to more serverless functions, likely with different logging formats, which makes finding and searching through logs even harder. And when you need application performance metrics, instrumenting several different languages can take a lot of time.
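One way to blunt the logging-format problem is to emit structured JSON from every function, whatever cloud it runs in. As a minimal sketch (the handler shape is AWS Lambda's, but the service name, region, and version values here are made-up placeholders that would normally come from environment variables):

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Hypothetical metadata for illustration; in a real deployment these
# would be read from environment variables or deployment config.
SERVICE = "checkout-api"
REGION = "us-east-1"
VERSION = "1.4.2"

def log_event(level, message, **fields):
    """Emit one JSON log line with consistent fields so any log
    platform can parse and filter it the same way."""
    record = {
        "level": level,
        "message": message,
        "service": SERVICE,
        "region": REGION,
        "version": VERSION,
    }
    record.update(fields)
    line = json.dumps(record)
    logger.info(line)
    return line  # returned so the helper is easy to exercise locally

def handler(event, context):
    """Minimal Lambda-style entry point: log the request, return a response."""
    log_event("INFO", "request received", path=event.get("path", "/"))
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

Because every line carries the same `service`, `region`, and `version` keys, the logs stay searchable even as the number of functions (and languages) grows.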
Datadog gives you a unified platform that works across cloud vendors and across languages. Search logs with ease, using built-in automatic parsing rules and powerful search capabilities. The ability to enrich logs and metrics with contextual and custom tags makes finding what you're looking for much easier: picture a log system you can query instantly by cloud provider, region, language, application, version, and more, with no need to context switch between logging systems or remember multiple query languages. Datadog can auto-instrument application performance monitoring for Java, Python, Ruby, .NET, Node.js, PHP, and Go applications, giving you a wealth of metrics without spending costly up-front time manually instrumenting applications. This is when the true value of serverless can come out!
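Those custom tags can be attached to metrics as well as logs. As a hedged sketch (the metric name and tag values are invented for illustration), this builds a counter in DogStatsD's plain-text wire format, `name:value|type|#tag1:v1,tag2:v2`, and sends it over UDP to a local agent:

```python
import socket

def format_metric(name, value, metric_type="c", tags=None):
    """Build a DogStatsD datagram: 'name:value|type|#tag1:v1,tag2:v2'.
    'c' is the counter type in the DogStatsD protocol."""
    payload = f"{name}:{value}|{metric_type}"
    if tags:
        payload += "|#" + ",".join(tags)
    return payload

def send_metric(name, value, metric_type="c", tags=None,
                host="127.0.0.1", port=8125):
    """Fire-and-forget UDP send to a DogStatsD agent; 8125 is the
    agent's default port. Returns the payload for inspection."""
    payload = format_metric(name, value, metric_type, tags)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload.encode("utf-8"), (host, port))
    return payload

# Example (illustrative tags): count checkout requests by cloud and region.
# send_metric("checkout.requests", 1, tags=["cloud:aws", "region:us-east-1"])
```

Tagging every metric with `cloud` and `region` is what makes the "query instantly by cloud provider" experience possible later, since the tags ride along with each data point.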
Need help implementing serverless functions or Datadog? Feel free to engage us via email at email@example.com; we're here to help!