What is Server-less Computing and Why is it Exciting?

With all of the talk, perhaps hype, around “server-less” computing these days, it is easy to get lost in jargon. After all, how can you run computer code without having a computer? In short, you can’t.

The attraction is the idea that you pay only for the computing capacity you require, not for idle servers running twenty-four hours a day, seven days a week.

Let’s look at a little history and explore how this whole “server-less” thing came to be, and, more exciting still, where it is going!

 

X-as-a-Service

The suffix “-as-a-Service” can be tacked onto just about anything: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), Functions-as-a-Service (FaaS), Database-as-a-Service (DBaaS). And the list goes on. But what does it all mean?

 

Virtual Machines

The term Software-as-a-Service has been in use for a while now, replacing Application Service Provider (ASP) in popular usage. It refers to a vendor hosting an application and servicing multiple customers from that hosted application. It is a welcome product of the internet and web servers, but not really a new technology by itself.

The path toward the current explosion of “-as-a-Service” products really began when the first virtual machines were introduced. Companies like VMware and open source projects like Xen (see https://www.xenproject.org/about/history.html for a history of the Xen project) began by emulating a whole computer using a program called a hypervisor, making it possible to define the entire environment of the computer with declarations in configuration files.

A computer could now run other computers as if they were programs. A Windows™ or Linux computer could run one or more Windows™ or Linux computers, store their state on disk, pause them, and even copy the disk state to other physical computers and run them there.

At first, virtual machines were slow. As virtual machines matured, the processor manufacturers introduced features in their chipsets to make running the emulation more efficient (in fact, less emulation and more direct execution). New features included “live migration”, which allows a running virtual machine to be paused, copied and restarted on another physical machine.

Each vendor developed a hypervisor application programming interface (API) to allow automation of the whole process.

Computers were now “software-defined”. And soon, so were networks, routers and load balancers. Infrastructure management was now possible as a software-managed service!
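
To make that concrete, here is a minimal sketch of what “software-defined” looks like in practice, using Python and Amazon’s boto3 library (the machine image ID and instance type below are placeholder values, not anything Alpha Cloud uses): an entire computer is created with a single API call.

    # Minimal sketch: create a virtual machine with one API call.
    # Assumes AWS credentials are already configured; the AMI ID is a placeholder.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="t3.micro",          # the "hardware" is just a declaration
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])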

 

So what does “server-less computing” mean?

When we refer to “server-less” computing, what we really mean is that the service that runs an application or function manages the provisioning of the system for us. This falls somewhat under the category of Platform-as-a-Service (PaaS), because platforms tend to create a place to run applications and functions without your having to provision the infrastructure (networks, load balancers, machine instances and such).

So there is no magic, and that is not a bad thing. Managing infrastructure is expensive and complicated. Setting up a physical environment in-house can mean configuring networks and buying redundant servers, routers and load balancers, as well as assuring that power, HVAC and Internet connectivity are all available. This requires both capital expenditure and labor – sometimes around-the-clock labor!

 

Enter “The Cloud”

Cloud services such as Amazon AWS, Microsoft Azure, and Google Cloud have helped to remove the physical aspect as well as much of the labor from the equation by providing automated infrastructure, platform and database services with consoles to manage them. These services make it possible to provision almost immediately and to scale as needed without having to invest in capital equipment or an army of hardware and network experts. The first implementations were based on the virtual machines discussed above.

 

Alpha Cloud

The Alpha Anywhere product line helps you create compelling applications for mobile and desktop devices without having to become an expert in a variety of technical areas. With the introduction of Alpha Anywhere Application Server for IIS and Alpha Cloud, you get the same benefit with respect to deployment. You don’t have to become an expert, because Alpha Cloud handles the provisioning, scaling and disaster recovery for you.

Alpha Cloud deployments are usage-based. You pay only for the computing capacity you need. Usage plans help add some predictability for cost management. Much like a cell phone plan, cloud computing is a utility, but for application deployment.

Developers new to Alpha Cloud often ask, “How do I connect to my server?” The simplest answer is “You don’t!” A better answer is “You don’t have to.” Alpha Cloud currently runs on Amazon AWS, spinning up instances of servers using Amazon AWS Autoscaling. Autoscaling is a service that interacts with a load balancer and a collection of virtual machines to make sure that the machines are healthy and that there is just the right amount of computing power to run the installed applications. The rules required to manage scaling are all defined as part of the configuration.

Alpha Cloud builds on the automation APIs for Amazon AWS by assigning web sites to groups of servers with spare capacity and, if no group has spare capacity, starting a new group of servers. To make sure that applications are always available, Alpha Cloud creates at least two servers for each group in separate data centers (called availability zones).
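
To give a flavor of the configuration involved, here is a hedged sketch in Python with boto3 (the group, launch template and zone names are hypothetical, and this is not Alpha Cloud’s actual setup): a group with a minimum of two servers spread across two availability zones, plus a rule that scales on CPU load.

    # Sketch only: an autoscaling group spanning two availability zones.
    # The group, launch template and zone names are hypothetical.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-group-1",
        LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "$Latest"},
        MinSize=2,   # at least two servers, in separate data centers
        MaxSize=10,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )

    # The scaling rule: add or remove servers to hold average CPU near 60%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-group-1",
        PolicyName="cpu-target",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,
        },
    )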

 

What if something goes wrong?

Computers crash, applications fail, networks have interruptions of service. Sometimes this is a result of a software defect. Sometimes this is a result of a hardware component failure. Stuff just happens.

Alpha Cloud takes advantage of best-practice architecture developed by Amazon that assumes things will go wrong and works to mitigate problems before they happen. Alpha Anywhere Application Server for IIS (used on Alpha Cloud) takes advantage of Microsoft IIS application pools to scale processes and to recover from application failures.

  • If a machine fails, Amazon Autoscaling starts a new one.
  • If the load on the machines in a group exceeds a predefined threshold, Amazon Autoscaling starts a new one.
  • If the load on the machines in a group drops below a certain threshold, Amazon Autoscaling terminates one or more of the machines to save on cost.
  • If an application stops responding, the application pool terminates the process and starts another one.
  • If a process crashes, a new one is automatically started by the application pool.
  • Since processes tend to perform better if they are restarted periodically, Alpha Cloud configures the application pool to restart all of its processes once a day. This is done in an “overlapping” fashion, meaning that new processes are started before IIS redirects traffic away from the old ones and then terminates them (the sketch after this list illustrates the idea).
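
The overlapping restart in that last point is worth a closer look. Here is a conceptual sketch in Python of the pattern (this is not IIS’s actual code; it just illustrates the start-new-before-stopping-old idea):

    # Conceptual sketch of an "overlapping" restart: the replacement process
    # is started and made ready before the old one is retired, so no
    # requests are dropped during the recycle.
    import subprocess

    def overlapping_restart(old_process, command):
        new_process = subprocess.Popen(command)  # start the replacement first
        # ... wait here until the new process reports it is ready for traffic ...
        old_process.terminate()                  # only then retire the old one
        old_process.wait()
        return new_process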


So is Alpha Cloud Server-less?

From the subscriber’s perspective, there are no servers on Alpha Cloud, only deployed web sites and applications. You do not need to manage servers on Alpha Cloud. Currently, some of the dialogs show you the servers (yes, plural) your web site is assigned to. In the future, this may be hidden from you entirely, for a number of reasons.


The Containers are Coming!

So Alpha Cloud is built on clusters of virtual machines. Virtual machines are great! They are self-contained descriptions of an entire computer that can be backed up, moved around, started up and shut down so they are available as needed.

But virtual machines also take time to start. The virtual computer still has to run all of the code to boot up, just as a physical machine does. And a virtual machine uses all of the memory allocated to it on the host computer (the one running the hypervisor that controls the virtual machine).

A set of operating system features introduced on Linux in recent years has made it possible to run a process that “thinks” it is a separate computer, but shares the installed operating system with the host computer. These became known as “containers”, because, like virtual machines, they are defined as if they are separate computers. They “contain” all of the assets required to run one or more applications.

Docker is one of the best-known technologies implementing containers (although there are several). Microsoft introduced Windows Containers with Windows Server 2016, quickly embracing Docker. Although the Windows™ implementation has lagged behind the Linux implementation of containers, it is evolving quite quickly.

These lightweight “containers” can be started in less than a second in many cases. Because they share the operating system code with the host, the memory footprint is smaller as well.

In fact, containers are so lightweight and quick to start that they can be used for one-off batch jobs and special processing requests and then shut down and forgotten. Or you could run a lot of them at once behind a load balancer.
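
To show how disposable a container can be, here is a small sketch using the Docker SDK for Python (it assumes Docker is running locally and the docker package is installed; the image and command are arbitrary examples): the container runs one job and is deleted the moment it exits.

    # Sketch: run a one-off job in a container, then throw the container away.
    # Assumes a local Docker daemon and the "docker" Python package.
    import docker

    client = docker.from_env()

    # remove=True deletes the container as soon as the command finishes.
    output = client.containers.run(
        "python:3.12-slim",                              # public base image
        ["python", "-c", "print('one-off batch job')"],  # the job itself
        remove=True,
    )
    print(output.decode())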

So containers are disposable. In fact, they are designed to be immutable. In other words, they are not expected to be saved and restarted, so they don’t need to be patched or upgraded. You just create a new container when something changes.


Herding Cattle

Containers are great. If there are one or two, or ten, maybe even twenty, it’s easy to find and manage them. Get enough of them, though, and you feel like you are herding cattle (for a fun read on characterizing servers and containers as pets or cattle, see http://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/).

The next step in the evolution of containers is a way to manage hundreds or even thousands of them, called “orchestration”. Orchestration takes all of the separate pieces that make containers run and groups them into services, similar to the Amazon AWS Autoscaling groups we discussed above.

The current front runner for managing containers is Kubernetes, a project created by Google and released as open source. The big three cloud vendors have all adopted Kubernetes, and each offers a service that creates a cluster of nodes to run containers.
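
As a taste of what orchestration feels like in practice, here is a hedged sketch using the official Kubernetes client for Python (the deployment name and namespace are hypothetical): you declare how many container replicas you want, and the cluster does the herding.

    # Sketch: ask a Kubernetes cluster to run five replicas of a deployment.
    # Assumes kubectl credentials are configured; the names are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()   # reuse local kubectl credentials
    apps = client.AppsV1Api()

    # Declare the desired state; Kubernetes starts or stops containers to match.
    apps.patch_namespaced_deployment_scale(
        name="my-web-app",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )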


Functions-as-a-Service?

One of the latest features made available by cloud providers is often, I think incorrectly, referred to as “server-less computing”. Functions-as-a-Service allows you to deploy a single function as a scheduled task or a web service. The granularity is now one specific web request rather than a full-blown web site (you might call it a microservice).

What makes it feel “server-less” is that you don’t provision a server to handle the request. The provider has a service listening for requests that fires up a container to handle one or more of them. A container is started, it handles pending requests, and then it can be destroyed. No more hourly charges. You pay for usage (CPU time, data transfer and memory capacity).
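
In fact, a complete Functions-as-a-Service deployment can be as small as a single handler. Here is a minimal sketch in the shape AWS Lambda expects for Python (the event field is an illustrative example): there is code, but no server for you to manage.

    # Sketch of an entire Functions-as-a-Service unit: one Python handler.
    # The platform invokes it once per request; you never provision a server.
    import json

    def handler(event, context):
        # "event" carries the request payload; the "name" field is illustrative.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }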


The Future is Bright and Alpha will be right there for you!

Containers, Kubernetes, Functions-as-a-Service: all of these are becoming ubiquitous. Alpha Software intends to make use of the best technologies for deploying applications. This includes full-blown web/mobile applications, Functions-as-a-Service/microservices and scheduled tasks.

What will you have to do to take advantage of these new deployment options? If we do our job right, and we intend to, you will have to do very little. You might, for example, see a new option like (Container) in the web site definition dialog of Alpha Cloud instead of the region selection you see now.

If you no longer have to consume precious time dealing with servers to provision the deployment environment, load balancing, redundancy, scalability, server utilization and the like, you can focus on building a great application! Whether Alpha Cloud is running on virtual machines or containers or the next great thing to come along, your investment in software is protected. You won’t have to re-architect your solution. Just pick another deployment option in the Alpha Cloud dialogs.

So you tell me. Isn’t your application on Alpha Cloud already “server-less”?

Learn more about Alpha Cloud 
