The computing industry has a history of hiding complexity to make things simpler and cheaper for users. Now, we're seeing it again with a concept called serverless computing.
Remember when the industry decided to do away with physical servers? Overnight, virtualisation saw companies switch to software-based virtual machines (VMs) in the cloud instead of buying physical tin. Then, containers did away with virtual machines to make things even more efficient, stripping away the full copy of the operating system that each VM required and instead forcing lots of containers to share the same underlying OS kernel. Now, serverless computing involves doing away with a constantly running server altogether.
Also known as functions as a service (FaaS), serverless computing finally realises a long-promised feature of cloud computing: you should only pay for what you use. In the past, if you wanted to run your own cloud-based service such as a currency converter, you'd have to keep it running on a virtual server even if you only used it infrequently. That meant you were still paying for that server's uptime when it was doing nothing.
Serverless computing changes all that. It doesn't start the service you're running until some other program asks to use it. Then it kicks off that function somewhere in the cloud (the idea is that you don't really care where) to serve the request, and shuts the service down again afterwards. The result, in many cases, is a lower computing cost, because serverless environments can do what cloud engineers call 'scaling to zero', where nothing is running at all until it's needed.
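To make the model concrete, here's a minimal sketch of what such a function might look like, written against an AWS-Lambda-style Python interface (a handler taking an event dict and a context object). The fixed exchange rate and field names are purely illustrative, not any provider's API:

```python
# Minimal sketch of a serverless currency-conversion function.
# The platform, not this code, controls the lifecycle: it starts an
# instance when a request arrives, may reuse it while traffic keeps
# coming, and tears it down again when requests stop ("scaling to
# zero"). The rate table and event fields are hypothetical.

RATES = {("GBP", "USD"): 1.27}  # illustrative fixed rate

def handler(event, context=None):
    """Handle one conversion request, then go back to idle."""
    amount = event["amount"]
    rate = RATES[(event["from"], event["to"])]
    return {"amount": round(amount * rate, 2), "currency": event["to"]}
```

Invoked with an event like `{"amount": 100, "from": "GBP", "to": "USD"}`, the handler returns the converted amount; between invocations, nothing runs and nothing is billed.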
It takes the right kind of program to run in a serverless environment. Your legacy monolithic ERP software with its many moving parts isn't well-suited to this operating model. Some part of the application will always be needed for something, and you can't just run that small part of the code; the whole application has to run all the time.
Where serverless computing does make sense is for microservices, where developers have broken up their applications into many small services, each doing just one thing. You can trigger requests to these services using an event framework that relays events between them. When one service takes an international payment, say, it can tell the event framework, which then wakes up your currency conversion service just long enough to serve the request. You'll have to design your software to support all that, though, which means an up-front investment now for cost and computing efficiency later.
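The wiring described above can be sketched with a toy, in-process event bus. This is a stand-in for a real managed event framework, and the class, event name, and rate below are all hypothetical; in a real deployment the bus would be a managed service and each handler a separately deployed serverless function:

```python
# Toy event framework: services never call each other directly.
# They publish events, and the framework invokes whichever
# functions have subscribed to that event type.

from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, fn):
        self._subscribers[event_type].append(fn)

    def publish(self, event_type, payload):
        # "Wake up" each subscribed function just long enough to
        # handle this one event, then let it go idle again.
        return [fn(payload) for fn in self._subscribers[event_type]]

def convert_currency(event):
    rate = 1.27  # illustrative GBP-to-USD rate
    return round(event["amount"] * rate, 2)

bus = EventBus()
bus.subscribe("payment.international", convert_currency)

# The payment service only knows the event name, not who handles it.
results = bus.publish("payment.international", {"amount": 200})
```

The key design point is the decoupling: the payment service and the conversion service share only an event name, which is what lets the platform start and stop each one independently.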
The other issue is vendor lock-in and control. Serverless computing is a cloud technology, and different cloud service providers offer their own implementations. AWS has Lambda, a serverless computing architecture that works with its own data storage, application integration, and developer tooling. Google has Cloud Functions. Microsoft touts its own 'end to end' Azure serverless platform encompassing compute, workflow management, database technology, and DevOps. None of these platforms were written to talk to each other, meaning that if you throw in your lot with one service provider, you might find it difficult to switch later.
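To see the lock-in concretely, compare the Python handler shapes the providers use. AWS Lambda hands your function a parsed event plus a context object, while a Google Cloud Functions HTTP trigger hands it a Flask-style request object. The function bodies below are illustrative, not provider code:

```python
# The "same" trivial function written against two providers'
# Python interfaces. Even this one-liner needs a shim layer to
# run on both platforms.

# AWS Lambda style: event dict plus context object.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello {name}"}

# Google Cloud Functions HTTP style: Flask-style request object.
def cloud_function(request):
    name = request.args.get("name", "world")
    return f"hello {name}", 200
```

Multiply that shim by every function, plus the provider-specific storage and eventing services each handler talks to, and switching providers starts to look like a rewrite.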
Cloud providers running these systems also come with vendor controls, such as limitations on how long a serverless function can run for and how many of them can run concurrently. That's part of the problem when you abstract away infrastructure and stop thinking about it - you also relinquish control over it and someone else gets to make the rules.
We're hopeful that this lock-in, and the degree of control providers exert, may ease over time thanks to growing support for open-source technologies. These include Knative, which enables the widely adopted open-source Kubernetes container management system to manage serverless functions.
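For illustration, Knative describes a serverless workload as a Kubernetes-style resource. The service and image names below are placeholders, and the minScale annotation (zero is already the default) is shown only to highlight scale-to-zero:

```yaml
# Hypothetical Knative Service definition: Knative routes requests
# to the container and scales instances up and down with traffic.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: currency-converter        # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"  # idle means zero instances
    spec:
      containers:
        - image: example.com/currency-converter:latest  # placeholder image
```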
There are options for companies that want full control of their own serverless functions. Knative, along with open-source frameworks like IronFunctions, can run in on-premises environments, as can Cloud Run, Google's Knative-based serverless framework. But for small businesses with limited IT expertise, self-hosted serverless will mean a steep learning curve.
If you're creating a proof of concept or a pilot system, cloud-based serverless environments are a cheap way to test your assumptions and potentially create more efficient computing models. Just be sure you have a long-term architectural plan so that you can make smart decisions about what to build using these frameworks. You shouldn't save time and money now at the expense of control and autonomy later.