What Is Serverless Computing?

Serverless computing is a cloud model in which developers can write and run code without needing to buy, manage, or maintain servers. Despite its name, serverless computing still involves servers, but all the underlying infrastructure—servers, operating systems, virtual machines, and containers—is managed by a cloud service provider. Users normally pay per use, which often results in cost savings. In this article, we explore serverless computing, how it works, and its benefits and drawbacks.

What Is Serverless Computing?

Serverless computing is a cloud computing model that enables software engineers to build and run apps without worrying about provisioning, scaling, securing, and managing the backend. In the serverless model, the responsibility for server provisioning and maintenance lies with a cloud provider, who manages and spins up the required computing infrastructure—servers, databases, and containers—on a need-to-use basis. Serverless and infrastructure as a service (IaaS) are both cloud computing models; we compare them later in this article.

Does Serverless Mean There Are No Servers?

Contrary to what the name suggests, the term “serverless” doesn’t mean that no servers are used. Servers are an integral part of managed serverless services.

“Serverless” refers to the fact that customer teams aren’t responsible for the servers. This contrasts with traditional IaaS models, where businesses purchase and manage server capacity. In the serverless model, the business’s DevOps team can focus on writing and packaging app code into the vendor’s containers, without having to worry about the servers underneath.

Key Serverless Features

Applications deployed in a serverless architecture have several features—they are hostless, elastic, event-driven, ephemeral, and stateless. Let’s examine what these terms mean.

  • Hostless: Instead of hosting applications on traditional servers, developers package app code in containers and deploy them directly. With servers abstracted away, enterprises avoid the high setup and operating costs, and the effort, associated with traditional server environments.
  • Elastic: Since developers don’t provision servers, they aren’t constrained by server capacity, and resources can be scaled up or down almost without limit. This makes it easy for enterprises to scale their systems as demand requires.
  • Event-driven: Code runs, resources are allocated, and scaling is done based on events, such as HTTP calls or queues. Events are generated and consumed in real time, regardless of volume.
  • Ephemeral: Functions are short-lived; they persist only as long as required to complete the events that triggered them. Since functions are deployed in containers, the containers and all resources within them are also ephemeral. This is a big plus, as ephemeral functions reduce the attack surface: attackers often require some degree of persistence to conduct successful attacks, but ephemeral serverless compute makes attacks harder to mount, since attack vectors are destroyed along with the functions and containers.
  • Stateless: Serverless apps do not store state (data from previous interactions) in their containers. Each new invocation or request is independent and will be handled by a fresh container on any available server. This means you can conveniently scale instances horizontally without worrying about saved state interfering with app behavior; any state an app does need lives in an external datastore, as illustrated in the sketch after this list.
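To make the stateless property concrete, here is a minimal sketch in Go of a handler that keeps nothing in memory between invocations. The Store interface is a hypothetical stand-in for whatever external datastore your provider integrates with; the point is that every invocation reads and writes that external store, so any fresh container can serve any request.

```go
package handler

import (
	"context"
	"fmt"
)

// Store is a hypothetical interface over an external datastore.
// In a real deployment this would be a managed service, since the
// container's own memory and disk vanish between invocations.
type Store interface {
	Get(ctx context.Context, key string) (int, error)
	Put(ctx context.Context, key string, value int) error
}

// HandleScore increments a player's score. Nothing is cached in
// package-level variables: every invocation reads and writes the
// external store, so any container on any server can handle it.
func HandleScore(ctx context.Context, store Store, playerID string) (int, error) {
	score, err := store.Get(ctx, "score:"+playerID)
	if err != nil {
		return 0, fmt.Errorf("reading score: %w", err)
	}
	score++
	if err := store.Put(ctx, "score:"+playerID, score); err != nil {
		return 0, fmt.Errorf("writing score: %w", err)
	}
	return score, nil
}
```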

These features make the serverless model useful in modern software development. With hosts abstracted away, nearly boundless horizontal scaling, real-time event handling, and a reduced attack surface, serverless computing is ideal for a range of use cases.

How Does Serverless Work?

Let’s examine the six phases involved in serverless computing in more detail to understand how it works from a developer-provider perspective.

[Figure: How serverless computing works]

Step 1: Upload App Code

The process begins with a serverless provider, which offers dashboards and prebuilt backend elements for operating serverless apps efficiently. A developer writes app code that includes commands specifying how the app responds to various queries, then uploads the code to the service provider’s account.

For example, say you want to build a serverless gaming app. You’ll create a serverless account with your service provider and write the app code in a pre-packaged template offered by the provider. The code will contain commands specifying what the app should do when a user clicks on a specific game character (e.g., a racing car) or requests a single-player rather than a multiplayer game. The code will be written in a language such as Go or Node.js, in files where your app and its cloud infrastructure are defined and stored as code. Then, you’ll open your command line interface (CLI) and upload the packaged app—which includes code, event sources, and API gateways—to your account.
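As a minimal sketch, assuming a provider that invokes plain HTTP handlers (many FaaS platforms wrap functions behind an HTTP interface), the uploaded app code for the gaming example might look like this in Go. The action names are illustrative, not any specific vendor’s API:

```go
package handler

import (
	"encoding/json"
	"net/http"
)

// Handle is the entry point the platform invokes for each request.
// The switch encodes what the app should do for each kind of query,
// e.g., selecting a game character or choosing a game mode.
func Handle(w http.ResponseWriter, r *http.Request) {
	switch r.URL.Query().Get("action") {
	case "select-character":
		json.NewEncoder(w).Encode(map[string]string{"character": "racing-car"})
	case "game-mode":
		json.NewEncoder(w).Encode(map[string]string{"mode": "single-player"})
	default:
		http.Error(w, "unknown action", http.StatusBadRequest)
	}
}
```

You would then package and upload this with the provider’s CLI; the exact command varies by vendor.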

Step 2: Define Event Triggers

With the app code uploaded, the developer defines event triggers. Events are “if-then” objects that contain information about how and when app code will be triggered, such as queue messages, HTTP requests (e.g., GET requests), and API calls.

For queue-based triggers, the platform polls the queue, reads new messages, and triggers the app code. Depending on the service provider, you can configure a single app to be triggered by multiple events.

For instance, you can configure your gaming app to trigger app code via message queues when in-app purchase requests are detected, and via GET requests when users attempt to fetch their score histories. GET requests retrieve data for the client, which is how elements like score histories and bank transaction histories are served.
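Trigger definitions are usually declarative, vendor-specific configuration. Purely as an illustration (the EventType names and Handler signature below are hypothetical, not any particular vendor’s API), here is what a trigger table for the gaming example might look like in Go, mapping queue messages to purchase handling and GET requests to score-history fetches:

```go
package triggers

// EventType identifies what kind of event arrived.
type EventType string

const (
	QueueMessage EventType = "queue-message" // e.g., an in-app purchase request
	HTTPGet      EventType = "http-get"      // e.g., a score-history fetch
)

// Handler is the signature the platform invokes with the raw event payload.
type Handler func(payload []byte) ([]byte, error)

// Triggers is a hypothetical trigger table: when an event of the given
// type is detected, the platform calls the mapped handler.
var Triggers = map[EventType]Handler{
	QueueMessage: handlePurchase,
	HTTPGet:      handleScoreHistory,
}

func handlePurchase(payload []byte) ([]byte, error)     { return []byte("purchase ok"), nil }
func handleScoreHistory(payload []byte) ([]byte, error) { return []byte("scores"), nil }
```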

Step 3: Intercept User Requests

The software receives a user request and triggers the app based on the predefined event triggers. Consider the HTTP request event type: when a user initiates an HTTP request, the software intercepts it and triggers the app.

Returning to the gaming app example, for the message queues and GET requests to function as specified, software or a load balancer must first intercept user requests and determine the request type (e.g., in-app purchase or score history), then respond by triggering the app code.
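To make the interception step concrete, here is a minimal, self-contained sketch using only Go’s standard library: a tiny gateway inspects the request path to determine the request type and dispatches to the matching function. In a real serverless deployment, the provider’s load balancer or API gateway performs this role for you.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// Route each request type to the function that serves it.
	mux.HandleFunc("/purchase", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "triggering purchase function")
	})
	mux.HandleFunc("/scores", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "triggering score-history function")
	})
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```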

Step 4: Spin Up Resources

Once the app is triggered, it relays the event to the cloud provider, who spins up the requisite number of containers and allocates the resources necessary to serve the user request.

Imagine your gaming app sees traffic of between 15,000 and 110,000 users. It’s currently at 20,000, with users sending out varying requests—making purchases, choosing characters, selecting features, and viewing scoreboards—each handled by a separate microservice. As traffic rises, the provider ensures that resources are spun up in time to respond to the rising number of requests. The provider also spins up the containers needed to run each microservice.

Step 5: Serve Query Responses to Clients

The container then pulls the requested information from a datastore and sends it to an API gateway. From there, it is forwarded to the web page, providing the user with the query response.

At this point, users of your gaming app who request their score histories, attempt to start a new game, or make in-app purchases are served their requests.
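A sketch of the response step, with an in-memory map standing in for the real datastore (purely illustrative): the function pulls the requested record and returns it as JSON, which the gateway then forwards to the web page.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// scores stands in for the datastore the container would query.
var scores = map[string][]int{"player-42": {1200, 1450, 1800}}

func scoreHistory(w http.ResponseWriter, r *http.Request) {
	player := r.URL.Query().Get("player")
	history, ok := scores[player]
	if !ok {
		http.Error(w, "unknown player", http.StatusNotFound)
		return
	}
	// The gateway forwards this JSON body back to the web page.
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(history)
}

func main() {
	http.HandleFunc("/scores", scoreHistory)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```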

Step 6: Spin Resources Down Once Traffic Decreases

Once all requests have been served, all containers and their data are deleted, to be spun up again when new events trigger the app code, e.g., when incoming requests are detected. This is referred to as zero scaling (where containers are scaled to zero), a subset of autoscaling.

You can feed the serverless architecture with metrics to spin resources up or down as user numbers vary on your gaming app. For example, you can use metrics such as the average number of requests per container or the average processing time for incoming requests.
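As a sketch of such a metric-driven policy (the numbers and function are illustrative, not a specific provider’s autoscaler): the desired number of containers can be computed from the observed request rate divided by a per-container target, clamped to a configured maximum, with zero containers when there is no traffic.

```go
package scaling

import "math"

// DesiredContainers implements a simple scale-by-request-rate policy:
// ceil(observed requests per second / target per container), clamped
// to maxContainers, and scaled to zero when there is no traffic.
func DesiredContainers(requestsPerSec, targetPerContainer float64, maxContainers int) int {
	if requestsPerSec <= 0 {
		return 0 // zero scaling: no traffic, no containers
	}
	n := int(math.Ceil(requestsPerSec / targetPerContainer))
	if n > maxContainers {
		return maxContainers
	}
	return n
}
```

For example, with 20,000 users generating 4,000 requests per second and a target of 100 requests per container, the policy asks for 40 containers; when traffic stops, it scales back to zero.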

Service Types in the Serverless Architecture

The serverless architecture consists of various service types—including FaaS, CaaS, edge computing, and DBaaS—united by the serverless features discussed above.

  • FaaS: Function as a service (FaaS) is an event-driven serverless computing architecture where apps are deployed as functions triggered by events.
  • CaaS: Serverless containers as a service (CaaS) is a model where users run containers using a service provider’s API or web portal, while the provider manages the containers and underlying infrastructure on their behalf.
  • Serverless edge computing: In serverless edge computing, computing resources and execution occur at the edge, i.e., in close geographical proximity to the end user.
  • DBaaS: A serverless database as a service (DBaaS) is a fully managed database that serves data to apps via well-defined API gateways. Serverless DBaaS allocates and spins up compute and storage resources on demand.

Serverless Computing vs IaaS

Serverless computing and infrastructure as a service (IaaS) are both prominent cloud computing models; the major differences lie in their scaling mechanisms and cost-effectiveness. Let’s take a closer look at them in the table below.

| Parameter | IaaS | Serverless Computing |
| --- | --- | --- |
| Definition | IaaS is a cloud model whereby enterprises pre-purchase servers and other computing resources from cloud providers based on estimated resource requirements for a specific billing cycle, e.g., per month, week, or day. | Serverless computing is a cloud model in which a cloud provider provisions the underlying infrastructure needed to deliver computing services on demand, in response to events in a customer’s app. |
| Deployment | Apps are deployed by the customer on infrastructure that is jointly managed by the service provider and the customer. | Apps are deployed by the user on infrastructure managed entirely by the service provider. |
| Scalability | Because users must purchase units of resources in advance, real-time scaling in response to traffic surges is impossible. | Offers near-limitless real-time scalability. |
| Cost | Not always cost-effective: enterprises often end up either over-provisioning cloud resources, paying for idle capacity, or under-provisioning them, causing performance degradation. | Highly cost-efficient: service providers autoscale resources up or down in response to traffic or code events, so users do not pay for idle capacity. |

Benefits of Serverless Computing

Serverless speeds up app deployment, eliminates the burden of provisioning and managing cloud resources, and reduces cloud waste.

  • Reduced ops: Monitoring and maintenance operations can bog down developers, leading to lower productivity. Serverless abstracts these operations away, allowing DevOps engineers to focus on more critical tasks.
  • Improved resource management: Serverless automatically scales down to zero (or a minimum of one instance, depending on the cloud vendor and configuration). Automatic scaling ensures no resource is wasted, and the savings can be invested elsewhere.
  • Cost savings: Serverless dramatically reduces the total cost of ownership (TCO) for enterprises, as they do not have to set up and run their own servers.
  • Faster time to market (TTM): Unlike traditional cloud computing models, where deployment can take months, in serverless, code can be executed and shipped in a few clicks, since infrastructure deployment and management are already taken care of. Teams can build, test, patch, and release apps in record time.
  • High availability: Many serverless cloud vendors offer high availability backed by SLAs, allowing enterprise users to achieve optimal availability with minimal configuration and monitoring. This potentially results in a better end-user experience (UX) and higher customer conversion than with a traditional cloud model.

Downsides of Serverless Computing

Though the benefits are quite substantial, outsourcing server management can have some downsides.

  • Vendor lock-in: Serverless creates dependence on cloud vendors. Unfortunately, switching vendors can be daunting as offerings and service delivery methods often vary across providers. To handle this, choose a provider that offers the required performance levels and features for your use case, including integrations with critical third-party tools. For example, if you’re using a serverless database, integration with third-party data tiering and streaming tools like Apache Kafka is key.
  • Latency: Most serverless services enable scaling to zero. However, spinning the stack back up after scaling to zero introduces a latency challenge known as a cold start. A cold start is the time delay between when the first request arrives (after scaling to zero) and when the infrastructure needed to process the request is ready. As webpage load time is measured in milliseconds, cold starts that take 1–2 seconds can frustrate end users into switching to your competitors. You can mitigate this by choosing a vendor that offers an exceedingly low cold start—say, 0.5 ms—and a keep-warm strategy for use cases where zero scaling may be inappropriate, such as stock exchange apps (see the keep-warm sketch after this list).
  • Debugging and troubleshooting complexities: Serverless, with its event-driven, ephemeral architecture, does not provide DevOps teams with a holistic view of their apps. This can make identifying the source of performance bottlenecks or security incidents tedious. DevOps teams can adopt distributed tracing to ease troubleshooting and debugging. Enterprises can also choose reputable service providers, implement secure coding practices, and monitor and log frontend activity to minimize the risk of performance degradation and security incidents.
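Where zero scaling is inappropriate, one common workaround is a keep-warm ping: a lightweight scheduled request that keeps at least one container warm. A minimal sketch in Go, assuming your function is reachable at some health endpoint (the URL and interval below are hypothetical):

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	// Ping the function every 5 minutes so the platform keeps at
	// least one warm container and users never hit a cold start.
	const endpoint = "https://example.com/my-function/health" // hypothetical URL
	for range time.Tick(5 * time.Minute) {
		resp, err := http.Get(endpoint)
		if err != nil {
			log.Printf("keep-warm ping failed: %v", err)
			continue
		}
		resp.Body.Close()
		log.Printf("keep-warm ping: %s", resp.Status)
	}
}
```

Many vendors also offer built-in minimum-instance or keep-warm settings, which are preferable to hand-rolled pings where available.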

Conclusion

The serverless architecture, in which cloud service providers take on the role of server installation and maintenance, offers numerous benefits to enterprise users across industries, including the healthcare, financial, e-commerce, and tech services sectors, and frees engineering teams from routine maintenance tasks.

Gcore’s suite of fully managed serverless services—Gcore FaaS, CaaS, and FastEdge—offers highly competitive pricing and 99.9% availability. Gcore’s servers are provisioned in multiple regions close to end users, which reduces load times. Request a demo today to see Gcore FaaS, CaaS, and FastEdge in action.
