I was very excited to attend Voxxed Days for the first time. The event was well organized, and most of the talks were very informative, covering a variety of interesting topics and new trends in software development.
One such topic was the “evolution of compute” toward serverless deployment, covered by three different speakers at the event. I attended two of those talks and will summarize them in this article.
The talk “Serverless, the future of the cloud?!” by Bert Ertman on day 1 was an introduction to the concepts, benefits, and drawbacks of serverless architecture, whereas David Schmitz’s talk on day 2, “Real-World-Serverless – Going Lambda without being burned too much”, went into more detail on how to deploy individual functions on AWS Lambda and further examined the advantages and disadvantages of such an architecture.
Together, the two talks gave a good overview of what serverless is, how to use it practically with Amazon AWS Lambda, and what to expect from it.
As outlined in the first talk, application deployment has evolved from physical servers (kept in a company’s own protected room, with all the maintenance that entails) to virtualization, to the cloud, then to containers (Docker), all the way to serverless, where you basically don’t know in which context the application is deployed.
The speaker also explained that in serverless, functions are the unit of deployment and scaling. With an IDE plugin for AWS Lambda, you can easily upload and deploy an individual function to the cloud.
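To make the "function as unit of deployment" idea concrete: an AWS Lambda deployment can consist of nothing more than a single handler function. A minimal Python sketch (the function and field names are illustrative, but the `(event, context)` signature is what Lambda actually calls) might look like this:

```python
import json

def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes.

    This one function is the entire unit of deployment and scaling;
    there is no server or framework code around it to manage."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Uploading just this file (or a zip of it) is enough to have a scalable endpoint, which is exactly the appeal the speaker described.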
Furthermore, serverless deployment is event driven: functions are invoked only by events, triggered for instance by REST API requests, keyboard inputs, or even voice input. However, one Lambda function cannot call another directly like a local function; such invocations, too, go through the AWS invocation machinery.
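A sketch of this event-driven model: the same handler can be wired to several triggers, and each trigger delivers a differently shaped event payload. The field checks below match what API Gateway REST events and S3/SQS notifications actually contain, but the routing logic itself is illustrative:

```python
def lambda_handler(event, context):
    # Each trigger type delivers a differently shaped event payload,
    # so the handler can route on the event's shape.
    if "httpMethod" in event:       # API Gateway REST request
        return {"statusCode": 200, "body": "handled HTTP request"}
    if "Records" in event:          # e.g. an S3 or SQS notification
        return {"processed": len(event["Records"])}
    return {"ignored": True}        # unrecognized trigger
```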
The obvious advantage is that developers don’t have to worry about deployments and server configuration, or, as the phrase summarizes it: “No server is easier to maintain than no server”.
In addition, the payment model is very efficient, since you only pay when code is actually invoked. As Bert Ertman put it, it “truly approaches the Pay-as-You-Go philosophy once promised by the cloud”. Scalability and fault tolerance handled by the cloud provider are further common benefits.
However, for some it can be a big disadvantage that you have no real control over the context of your code’s deployment and resource allocation, meaning you lose the possibility of server-level optimization.
Some other notable drawbacks can be:
- Vendor lock-in
- No support for multi-tenancy
- Security concerns (increased surface for attacks, which if exploited could also result in a huge bill)
- Difficulty in testing
Security issues specific to serverless deployment with AWS Lambda include event injection and billing attacks. The speaker also advised knowing your per-account limits, pointing out that at the time only 1,000 concurrent executions were possible by default.
Some of these issues can be handled in various ways, such as setting up billing alarms, defining a security boundary for every Lambda, declaring roles and policies per function, or using a security watchdog like Snyk.
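As an example of such a per-function security boundary, a least-privilege execution role can grant a Lambda nothing beyond writing its own logs. The policy below has the shape of AWS’s standard basic execution policy; in a real setup the `Resource` would usually be narrowed to the function’s own log group:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```

If an attacker exploits the function, this boundary limits what they can reach, and a billing alarm limits how expensive the surprise gets.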
One noteworthy issue of AWS Lambda deployment is the cold start. It happens once for each concurrent execution of a function. There is no SLA (Service Level Agreement) and no hard details, just guidelines, and the cold start time has no obvious correlation with the size of the deployed code. The common workaround of pinging the Lambda to keep it warm and reduce latency is invasive and wastes resources, and AWS kills even warm Lambdas after a certain unspecified time period.
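The keep-warm workaround usually looks like the sketch below: a scheduled CloudWatch/EventBridge rule pings the function periodically, and the handler short-circuits those pings so they skip the real work. Checking the event’s `source` field works because scheduled events set it to `aws.events`; the rest of the handler is illustrative:

```python
def lambda_handler(event, context):
    # Scheduled keep-warm pings carry "source": "aws.events";
    # answer them cheaply without running the business logic.
    if event.get("source") == "aws.events":
        return {"warmed": True}
    # ... actual business logic would run here ...
    return {"statusCode": 200, "body": "processed"}
```

This illustrates why the talk called the pattern invasive: the warm-up concern leaks into every handler, and the pings still consume (billed) invocations.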
Better responses to cold starts might be to measure the actual user experience, or to design with latency in mind and give the user proper feedback while they wait.
Summarizing some of the above-mentioned points from the talks:
- Designing distributed systems is still hard
- The challenges are mostly just different ones
- You need a cloud-savvy operations team member
- DevOpsSec done right (thinking about security right from the start!)
- Tooling is still evolving
- It’s not a silver bullet!
- Sometimes a simple VM or a Docker image is enough
- Fewer servers that we need to take care of
- And… set billing alerts
I hope you found something interesting in this overview of the new, much-hyped paradigm. You can find the slides of the second talk here.
I really enjoyed Voxxed Days, and I definitely learned something new.