In earlier installments of this series, we explored the foundational steps for establishing a serverless project, including setting up a development environment with emulation tools and outlining a general multi-tier architecture. This installment focuses on design patterns for building a serverless system under the Function as a Microservice (FaaM) paradigm.
The question of service granularity has evolved alongside the history of distributed computing. In the 1970s and 80s, early integration approaches such as Remote Procedure Calls (RPC), CORBA, and Distributed Computing Environment (DCE) emerged but were often limited in scalability and flexibility for hybrid environments. The mid-1990s marked a turning point with the advent of Service Oriented Architecture (SOA), which gained prominence by the early 2000s through SOAP-based implementations. SOA revolutionized software design by enabling loosely coupled services. The evolution continued in the 2010s with REST, Event-Driven Architectures (EDA), and Microservices, but a persistent challenge remains: determining the ideal size and scope of a service.
Historically, SOA defined two main types of service granularity: coarse-grained and fine-grained. Coarse-grained services encompass broad business capabilities, while fine-grained services zero in on narrowly scoped functions. Both extremes pose challenges; large coarse-grained services risk becoming unwieldy "monoliths," while fine-grained services can produce an overwhelming number of small, interdependent components, complicating scalability, modifiability, and security. The Service-Oriented Modeling Framework (SOMF) offers nuanced categories: Atomic services are basic, indivisible components with limited processes; Composite services aggregate atomic or other composite services in hierarchical structures; and Clusters are groups of related services collaborating toward broader solutions.
Translating these concepts into the realm of serverless computing with the Serverless Framework and AWS Lambda involves understanding three central constructs: Events, Functions, and Services. Events trigger functions, which execute discrete pieces of code performing specific tasks. Services encapsulate groups of Lambda functions along with their triggering events and infrastructure requirements. The Serverless Framework's examples typically suggest creating an individual Lambda function for each CRUD operation—implying that an entity like User could correspond to four Lambda functions. While straightforward, this approach scales poorly: ten entities lead to 40 functions, each requiring its own deployment, monitoring, and API Gateway endpoint, which increases operational overhead.
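To make the scaling problem concrete, a minimal sketch of what the one-function-per-operation approach looks like in a `serverless.yml` follows. The service name, handler paths, and routes here are hypothetical, not taken from the series' actual project; the point is that a single User entity already demands four function definitions:

```yaml
# serverless.yml (fragment) — hypothetical layout illustrating one Lambda
# function per CRUD operation for a single User entity.
service: users-crud

functions:
  createUser:
    handler: users/create.handler
    events:
      - http: { path: users, method: post }
  getUser:
    handler: users/get.handler
    events:
      - http: { path: users/{id}, method: get }
  updateUser:
    handler: users/update.handler
    events:
      - http: { path: users/{id}, method: put }
  deleteUser:
    handler: users/delete.handler
    events:
      - http: { path: users/{id}, method: delete }
```

Multiply this block by every entity in the system and the configuration, deployment artifacts, and permissions all grow linearly with the number of operations.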
To address this complexity, the Function as a Microservice (FaaM) pattern is proposed. Here, each Lambda function acts as a small cluster, handling multiple related business capabilities and routing various atomic events internally. For example, rather than one function per city operation, a geolocation function manages CRUD operations for countries, cities, and regions collectively. To prevent monolithic bloat, each function should ideally manage no more than 7±2 entities, in line with cognitive limits described by Miller’s Law. This design assumes that each function maintains sole responsibility for its specific data source.
Implementing FaaM within AWS Lambda requires efficient routing mechanisms, given the constraint that each Lambda function has a single entry handler. A Client-Dispatcher-Server pattern can facilitate this by directing incoming requests based on their URI paths. For instance, an HTTP request to /movie should route to the relevant function handling movie-related operations. Using this pattern, multiple logical services—such as users, geolocation, and posts—can share the same handler code but differentiate processing internally through routing. This approach reduces the number of Lambda functions and API Gateway endpoints, simplifying management while preserving clear business domain boundaries.
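The routing idea can be sketched as a small dispatch table inside the single Lambda entry handler. The route names, business-logic functions, and the event shape (the API Gateway proxy format with `httpMethod` and `path` keys) are illustrative assumptions, not the series' actual code:

```python
# Sketch of the Client-Dispatcher-Server pattern inside one Lambda handler:
# a dispatch table maps (HTTP method, path) pairs to business-logic functions,
# so many logical endpoints share a single entry point.

def list_movies(event):
    # Placeholder business logic for GET /movie
    return {"statusCode": 200, "body": "all movies"}

def create_movie(event):
    # Placeholder business logic for POST /movie
    return {"statusCode": 201, "body": "movie created"}

# Dispatch table: (method, path) -> handler for that business capability
ROUTES = {
    ("GET", "/movie"): list_movies,
    ("POST", "/movie"): create_movie,
}

def handler(event, context=None):
    """Single Lambda entry point; dispatches on method and URI path."""
    key = (event.get("httpMethod"), event.get("path"))
    target = ROUTES.get(key)
    if target is None:
        return {"statusCode": 404, "body": "no route for %s %s" % key}
    return target(event)
```

Adding a new capability to the function then means registering one more entry in the table rather than deploying a new Lambda, which keeps the function count bounded while the routing stays explicit.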
In practice, the Serverless Framework configuration might define several services with multiple HTTP events all pointing to the same handler—for example, login, logout, and signup endpoints under users, various location endpoints under geolocation, and CRUD operations for movies under posts. The routing logic inside the shared handler inspects the event’s URI and method to invoke the appropriate business logic module. This strategy enhances maintainability by avoiding repetitive handler code and enables modular growth within serverless architectures.
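As a sketch of that configuration, the fragment below shows one "posts" service whose HTTP events all point at the same shared handler; the handler path and routes are hypothetical, chosen only to mirror the movie example above:

```yaml
# serverless.yml (fragment) — hypothetical FaaM-style service: several HTTP
# events target one shared handler, which routes internally by path and method.
functions:
  posts:
    handler: posts/handler.handler
    events:
      - http: { path: movie, method: get }
      - http: { path: movie, method: post }
      - http: { path: movie/{id}, method: put }
      - http: { path: movie/{id}, method: delete }
```

Compared with the one-function-per-operation layout, this keeps a single Lambda and a single set of routes per business domain, while the dispatcher inside the handler preserves the separation between operations.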