This article is the third in a series of guides on serverless computing. It explores Google Cloud Platform (GCP) serverless computing, details key GCP components, and explains the differences between GCP and other serverless platforms. This article further details the use cases and challenges of deploying serverless technologies, along with best practices for end-to-end monitoring that can help organizations maximize the benefits of serverless computing on GCP.
To learn more about serverless computing, read part one on Amazon Web Services (AWS) and part two on Microsoft Azure serverless platforms.
What are the advantages and disadvantages of serverless computing?
Serverless computing encompasses a range of managed, on-demand services that teams can utilize to develop and deploy applications without the complexities of managing internal infrastructure, such as servers and virtual machines (VMs). With serverless computing, scaling for cloud-based services and features is automatic and flexible, and costs are based on usage, such as requests, compute time, or events. This approach enables teams to focus on writing code, accelerating development, supporting event-driven architecture, and facilitating easier scaling to meet demand.
As discussed in the previous articles in this series, serverless applications also have their disadvantages. Serverless computing relies on microservices that often run on a complex, distributed backend. Because serverless applications are made up of decoupled microservices, troubleshooting can be challenging. Requests across cloud services and providers often involve a complex network of service calls through APIs, blob storage, event triggers, and other mechanisms. To address this complexity, developers can utilize end-to-end distributed tracing to visualize the entire request path and identify performance issues or bottlenecks along the way. Distributed tracing clarifies where an error happened and which team is responsible for fixing it.
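Distributed tracing works by attaching a shared trace identifier to a request and forwarding it on every hop. The sketch below is a minimal, illustrative stand-in (not any particular tracing library's API): it shows how one trace ID and a chain of parent/child span IDs let a tracing backend reassemble the full request path across services.

```python
import uuid

def make_trace_context():
    """Create a new trace context at the edge of the system."""
    return {"trace_id": uuid.uuid4().hex, "span_id": uuid.uuid4().hex[:16]}

def child_context(parent):
    """Derive a child span: same trace_id, new span_id, parent recorded."""
    return {
        "trace_id": parent["trace_id"],       # shared across the whole request
        "span_id": uuid.uuid4().hex[:16],     # unique per hop
        "parent_span_id": parent["span_id"],  # links spans into a tree
    }

# Simulate a request crossing three services; each hop forwards the context
# (in practice, via an HTTP header such as W3C `traceparent`).
edge = make_trace_context()
api = child_context(edge)
db = child_context(api)

# Every hop shares one trace ID, so a backend can stitch the path together.
assert edge["trace_id"] == api["trace_id"] == db["trace_id"]
```

Because each span records its parent, the backend can show not just where an error occurred but which upstream call led to it.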
An overview of serverless technologies across providers
Serverless technology providers offer a complex landscape of services and functionality. This section provides an overview of the services available across serverless platforms, including GCP, Amazon Web Services (AWS), and Microsoft Azure. Teams can combine these services to build and deploy serverless applications.
Compute services
Compute services allow teams to run code without managing internal servers and infrastructure. Serverless platforms offer the following compute services:
Function-as-a-service (FaaS) solutions run event-driven functions for tasks such as capturing and moving data and responding to messaging events. Examples include Google Cloud Functions, AWS Lambda, and Azure Functions.
Serverless containers run full applications that are packaged to run inside containers. Containers offer auto-scaling (depending on demand) and pay-per-use. Examples include Google Cloud Run, AWS Fargate, and Azure Container Apps.
Platform-as-a-service (PaaS) solutions offer lightweight serverless capabilities for web apps that automatically scale on demand. Examples include Google App Engine (Standard) and AWS Elastic Beanstalk (which is only partially serverless).
Data and storage services
Data and storage services consist of serverless databases and storage that abstract demand away from an organization’s internal infrastructure and provide automatic scaling and usage-based pricing.
Serverless database-as-a-service (DBaaS) examples include Google Cloud Firestore, Firebase Realtime Database, Amazon DynamoDB, and Azure Cosmos DB.
Object storage for serverless examples include Google Cloud Storage, Amazon Simple Storage Service (S3), and Azure Blob Storage.
Data warehousing and analytics examples include Google BigQuery, Amazon Athena, and the Snowflake serverless query model.
Integration and messaging
Integration and messaging for serverless computing enable event-driven architectures and microservices communication via the cloud.
Publish/subscribe (pub/sub) messaging provides asynchronous communication that decouples message producers (publishers) from message consumers (subscribers). Examples include Google Pub/Sub, Amazon Simple Notification Service (SNS)/Amazon Simple Queue Service (SQS), and Azure Service Bus.
Workflow orchestration manages steps and procedures. Examples include Google Workflows, AWS Step Functions, and Azure Logic Apps.
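The decoupling that pub/sub messaging provides can be shown with a minimal in-process sketch. The `Broker` class below is an illustrative stand-in for a managed service such as Pub/Sub or SNS, not a real client library: the publisher never references its consumers, so subscribers can be added or removed without touching the publishing code.

```python
from collections import defaultdict

class Broker:
    """Minimal in-process stand-in for a managed pub/sub service."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The publisher only knows the topic, never the consumers.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("orders", lambda msg: received.append(("billing", msg)))
broker.subscribe("orders", lambda msg: received.append(("shipping", msg)))
broker.publish("orders", {"order_id": 42})
# Both subscribers receive the message independently of the publisher.
```

In a real deployment, the broker also buffers messages and retries failed deliveries, which is what lets event-driven microservices absorb traffic spikes.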
Back end–as-a-service (BaaS)
Back end–as-a-service (BaaS) technologies offer back-end building blocks that can be plugged into applications.
Authentication and identity examples include Firebase Authentication, Auth0, and Amazon Cognito.
APIs and GraphQL back-end examples include Firebase, AWS AppSync, Hasura, and Supabase.
Push notifications and real-time update examples include Firebase Cloud Messaging, PubNub, and Pusher.
AI/machine learning (ML) serverless services
AI/machine learning (ML) serverless services allow teams to build and deploy AI and ML applications without provisioning or managing underlying server infrastructure.
Pre-trained API examples include the Google Cloud Vision, Natural Language, and Translation APIs; Amazon Rekognition; and Azure Cognitive Services.
Serverless ML platform examples include Google Vertex AI and Amazon SageMaker Serverless Inference.
Why is serverless computing important?
In broad terms, serverless computing involves decoupled microservices. By moving compute, communications, storage, and services out of an organization’s infrastructure, much of the overhead required to build and maintain these technologies, as well as scaling for demand, is shifted to distributed cloud services. Another advantage is that costs are associated with usage, as opposed to renting space, buying hardware, running cable, hiring staff, and so on.
How can DevOps best manage a serverless computing environment?
The complexities of serverless computing can make it difficult for DevOps teams and managed service providers to troubleshoot microservices running on a complex, distributed back end connected through service calls. Tracing requests across such an environment to identify the root cause of errors is itself a challenge. A dedicated monitoring solution for serverless computing not only provides end-to-end traceability for errors and latency but can also report on usage statistics and costs.
What are GCP serverless technologies?
GCP is a suite of cloud services and technologies that enable organizations to develop, test, and deploy applications that automatically scale based on demand, while reducing the costs typically associated with building and managing in-house infrastructure. The GCP portfolio covers every layer of modern application design and development within Google Cloud: compute, data storage and management, systems integration, back-end components, AI/ML, and DevOps. These services operate on the same global infrastructure that supports other Google online services such as Gmail, Google Photos, and YouTube.
GCP serverless components: In-depth
This section discusses the following serverless components in detail:
Cloud Functions (2nd Gen): Cloud Functions are event-driven and operate as a FaaS solution. They execute in a secure, isolated environment and are stateless, meaning they do not store data between invocations. Cloud Functions help build lightweight APIs that process data in real time, handle webhooks, integrate with third-party services, and create microservices. Examples of their use include lightweight triggers, such as file uploads, Pub/Sub events, and webhooks.
Cloud Run: Cloud Run enables stateless containerized applications in a serverless environment. Containerized applications run within an isolated package of code, referred to as a container. These containers include all the dependencies an application might need to run on any host operating system (OS), such as libraries, configuration files, and frameworks, amalgamated into a single lightweight executable file. Cloud Run orchestrates the deployment and operation of containers that scale according to demand. Examples of their use include APIs, services, and background jobs.
Google Kubernetes Engine (GKE) Autopilot: GKE Autopilot is a fully managed, serverless mode that abstracts the underlying infrastructure and cluster management, allowing users to focus on deploying and managing their applications rather than infrastructure details. In Autopilot mode, Google manages the entire cluster infrastructure, including nodes, scaling, security configurations, and upgrades, based on the Kubernetes workload specifications provided by the user (including pods, deployments, and services).
Cloud Run Jobs: Designed for executing run-to-completion tasks that do not respond to HTTP requests, such as batch processing, database migrations, nightly reports, and other operational workloads, Cloud Run Jobs are specifically for tasks that perform work and then exit once finished. Examples include batch/one-off jobs, such as nightly extract, transform, and load (ETL) or bulk data processing.
Cloud Firestore/Firebase: GCP offers two real-time NoSQL databases: Cloud Firestore, the next generation of Cloud Datastore, and the Firebase Realtime Database. Firestore is available both directly on GCP and through the Firebase platform. The integration between GKE Autopilot and Cloud Firestore allows applications running on GKE to interact with Firestore data. For example, an application deployed on GKE can use the Firestore client library to read and write data, enabling real-time updates and offline functionality.
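Cloud Run Jobs, described above, can fan batch work out across parallel task instances. The sketch below shows the core sharding logic: Cloud Run exposes the `CLOUD_RUN_TASK_INDEX` and `CLOUD_RUN_TASK_COUNT` environment variables to each task, and the job code uses them to pick its slice of the input. The `record-N` inputs are stand-ins for real batch data.

```python
import os

def shard_for_task(items, task_index, task_count):
    """Return the slice of work this task instance should process."""
    return [item for i, item in enumerate(items) if i % task_count == task_index]

def main():
    # Cloud Run jobs set these environment variables on each task instance.
    task_index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", "0"))
    task_count = int(os.environ.get("CLOUD_RUN_TASK_COUNT", "1"))

    records = [f"record-{n}" for n in range(10)]  # stand-in for real batch input
    for record in shard_for_task(records, task_index, task_count):
        print(f"task {task_index}: processing {record}")

if __name__ == "__main__":
    main()
```

Because each task computes a disjoint shard from the same deterministic rule, the job covers every record exactly once, and each task exits when its slice is done.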
Examples of application or data processes that can take advantage of GCP serverless components and services include:
Front-end and mobile developers: Development teams or single developers can build a mobile application minimum viable product (MVP) using Firebase and Cloud Functions.
Data engineers and analytics teams: Data analysis teams can collect data from multiple sources using Pub/Sub for event ingestion and Cloud Functions/Cloud Run for ETL tasks, flowing data into Google BigQuery dashboards.
DevOps and site reliability engineers: Cloud Functions can automate tasks such as clean up from workflows or pipelines or scaling operations that depend on usage. DevOps and site reliability engineering (SRE) teams can deploy Cloud Run for custom monitoring and alerting service tasks.
AI/ML practitioners: Cloud Run can host custom ML inference services, and Vertex AI can serve demand-forecasting models that scale only when queries come in.
GCP serverless use cases
Consider the following use case examples for GCP serverless technologies:
Automated data processing. When a new image file is uploaded to a Cloud Storage bucket, a Cloud Run function can be triggered to automatically generate optimized versions, such as thumbnails, or to convert the image into different formats. The results are saved back to Cloud Storage, and the metadata in Firestore is updated. Similarly, Cloud Functions can process data as it arrives in Cloud Storage, Firestore, or Pub/Sub, making Cloud Functions ideal for ETL tasks, data validation, or transformation.
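The validation-and-transformation step in an ETL pipeline like this often reduces to a small pure function applied to each incoming event. The sketch below is illustrative (the field names and unit normalization are assumptions, not a real schema): it rejects malformed records and normalizes the rest before they are loaded downstream.

```python
def transform_record(raw):
    """Validate and normalize one incoming event before loading it downstream."""
    if "user_id" not in raw or "amount" not in raw:
        raise ValueError("missing required field")
    return {
        "user_id": str(raw["user_id"]),
        "amount_cents": round(float(raw["amount"]) * 100),  # normalize units
        "source": raw.get("source", "unknown"),             # default missing fields
    }

events = [
    {"user_id": 7, "amount": "19.99", "source": "web"},
    {"user_id": 8, "amount": 5},
]
loaded = [transform_record(e) for e in events]
```

Keeping the transform pure (no I/O) makes it trivial to unit test, while the surrounding function handles the trigger and the write to BigQuery or Firestore.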
APIs and microservices. Serverless functions, like Cloud Functions or Cloud Run, can serve as lightweight, event-driven APIs for applications. For example, a serverless shopping list app can utilize Cloud Functions to handle create, read, update, and delete (CRUD) operations for data, with the backend automatically scaling to meet demand. Cloud Run is particularly well-suited for containerized microservices, allowing developers to deploy and manage applications without the need to provision or manage servers.
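A serverless CRUD back end like the shopping list example boils down to a single dispatch function. The sketch below is a framework-agnostic stand-in: `request` is a plain dict rather than a real HTTP request object, and the in-memory `STORE` stands in for Firestore, so the shape of the handler is visible without any GCP dependencies.

```python
STORE = {}    # in-memory stand-in for Firestore in this sketch
NEXT_ID = 0

def handle(request):
    """Dispatch one CRUD operation; `request` is a dict standing in for
    the HTTP request object a functions framework would pass in."""
    global NEXT_ID
    method, item_id = request["method"], request.get("id")
    if method == "POST":
        NEXT_ID += 1
        STORE[NEXT_ID] = request["body"]
        return {"status": 201, "id": NEXT_ID}
    if method == "GET":
        if item_id in STORE:
            return {"status": 200, "body": STORE[item_id]}
        return {"status": 404}
    if method == "PUT" and item_id in STORE:
        STORE[item_id] = request["body"]
        return {"status": 200}
    if method == "DELETE":
        if STORE.pop(item_id, None) is not None:
            return {"status": 204}
        return {"status": 404}
    return {"status": 400}
```

Because the handler holds no state between invocations other than the database, the platform can scale instances up and down freely to meet demand.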
Scheduled tasks and automation can be managed using Cloud Scheduler and Cloud Run. A nightly job can be triggered to identify idle Google Compute Engine instances using the Google Compute Engine API and Cloud Monitoring, and to then automatically stop or delete those instances to reduce cloud costs, with all actions logged to Google Cloud Logging. Similarly, a daily Cloud Scheduler job can trigger a Cloud Run function to query a data warehouse in BigQuery, process the data into a report, and send it via an email API.
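The decision logic at the heart of such a cleanup job can be sketched in a few lines. The thresholds below are assumed policy values for illustration, not GCP defaults, and the instance dicts stand in for the metrics the job would fetch from the Compute Engine API and Cloud Monitoring.

```python
from datetime import datetime, timedelta, timezone

IDLE_CPU_THRESHOLD = 0.05   # 5% average CPU (assumed policy, not a GCP default)
IDLE_WINDOW = timedelta(days=7)

def find_idle_instances(instances, now):
    """Flag instances whose average CPU stayed under the threshold
    and that have not been active within the idle window."""
    return [
        inst["name"]
        for inst in instances
        if inst["avg_cpu"] < IDLE_CPU_THRESHOLD
        and now - inst["last_active"] > IDLE_WINDOW
    ]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
fleet = [
    {"name": "web-1", "avg_cpu": 0.42, "last_active": now - timedelta(hours=2)},
    {"name": "old-build-runner", "avg_cpu": 0.01, "last_active": now - timedelta(days=30)},
]
idle = find_idle_instances(fleet, now)   # -> ["old-build-runner"]
```

The scheduled job would then stop or delete each flagged instance and write an audit entry to Cloud Logging, keeping the destructive step separate from the detection step.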
Batch inference run jobs process incoming data, such as daily prediction runs for fraud detection. For example, a financial institution needs to analyze a large volume of daily transactions to spot potential fraud using a pre-trained ML model.
Chatbot back ends support a conversational AI model that must scale up or down based on user demand. A typical use case for serverless chatbot back ends is to handle user interactions, process natural language, connect with external APIs, and manage conversation state.
What industry shifts have affected serverless compute?
Industry practices are shifting from FaaS models to more container-based platforms like Cloud Run. These shifts are motivated by rising demand for auto-scaling and pay-per-use pricing without needing to rewrite existing applications or being limited to specific programming languages and environments. Running containerized applications in Cloud Run helps ensure apps are not tied to a single vendor’s ecosystem, making migration and integration with existing continuous integration/continuous deployment (CI/CD) workflows easier.
Another significant shift is the growing focus on integrated DevOps and security. Teams now require modern serverless platforms to offer built-in features for continuous integration and deployment, traffic management, distributed tracing, and robust security measures, including vulnerability scanning, secret management, and supply chain integrity verification.
Monitoring for GCP serverless computing: what features should users look for?
Through end-to-end distributed tracing, developers can visualize the whole journey of a request and pinpoint any performance failures or bottlenecks. When considering a monitoring solution for serverless computing, consider the following features:
- Unified visibility across all serverless services (including compute, storage, messaging, and databases)
- Unified observability across multicloud and hybrid environments
- Pre-built dashboards stitching together metrics, logs, and traces
- Anomaly detection and forecasting using ML on serverless metrics
- Deeper function-level insights (such as cold starts and cost per function)
- Error and deployment tracking
- Enhanced metrics
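The anomaly detection feature listed above can be understood through a deliberately simple baseline. The z-score check below is an illustrative stand-in for the ML-based detection a monitoring product would provide: it flags a metric value that sits far outside the historical distribution, such as a sudden spike in function invocations.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, z_threshold=3.0):
    """Flag `latest` if it is more than z_threshold standard deviations
    from the historical mean (a simple baseline, not a product's algorithm)."""
    if len(history) < 2:
        return False          # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu   # flat history: any change is anomalous
    return abs(latest - mu) / sigma > z_threshold

invocations = [102, 98, 101, 97, 103, 99, 100, 101]
is_anomalous(invocations, 100)   # normal traffic
is_anomalous(invocations, 450)   # spike worth alerting on
```

Production systems layer seasonality awareness and forecasting on top of this idea, which is why the list above calls out ML-based detection rather than static thresholds.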
Learn more
Datadog Serverless Monitoring offers comprehensive visibility into managed services powering serverless applications by aggregating real-time metrics, logs, and traces from serverless compute. Moreover, Datadog provides dedicated integrations with GCP services.
