Exploring Cloud PaaS Services: Benefits, Drawbacks, and a Comparison of AWS, Azure, and GCP PaaS Solutions

Cloud Platform-as-a-Service (PaaS) is a type of cloud computing service that provides a platform for developers to build and deploy applications without the need for infrastructure management. PaaS solutions provide a complete platform for application development, testing, deployment, and scaling. In this article, we will discuss the benefits and drawbacks of PaaS, the problems it solves, and compare PaaS services offered by the top cloud providers, namely AWS, Azure, and GCP.

Benefits of PaaS:

Easy to use: PaaS makes it easy for developers to create and deploy applications, as the underlying infrastructure is abstracted away. Developers can focus on writing code and developing applications, without worrying about infrastructure management.

Reduced time-to-market: PaaS solutions provide pre-configured environments, libraries, and development tools, which makes it easy for developers to create and deploy applications quickly.

Scalability: PaaS platforms can easily scale up or down to meet the changing demands of the application. This makes it easy for businesses to handle traffic spikes, seasonal loads, and other sudden changes.

Cost-effective: PaaS solutions are usually offered on a pay-as-you-go model, which means businesses only pay for the resources they use. This can be more cost-effective than building and maintaining an in-house infrastructure.

Security: PaaS providers are responsible for the security of the underlying infrastructure, which means businesses can focus on securing their applications and data.

Drawbacks of PaaS:

Limited customization: PaaS solutions provide pre-configured environments, which may not be suitable for all applications. Businesses may have to compromise on customization and flexibility.

Dependency on the PaaS provider: PaaS solutions are tied to the provider’s infrastructure and services. Businesses may have to change their applications if they decide to switch providers.

Vendor lock-in: PaaS solutions can lead to vendor lock-in, as businesses may find it difficult to migrate their applications to a different provider or to an on-premises infrastructure.

Limited control: PaaS solutions abstract away the underlying infrastructure, which means businesses may have limited control over the infrastructure and services.

Problems solved by PaaS:

Infrastructure management: PaaS solutions abstract away the underlying infrastructure, which means businesses do not have to worry about infrastructure management.

Development and deployment: PaaS solutions provide pre-configured environments, libraries, and development tools, which makes it easy for developers to create and deploy applications.

Scalability: PaaS solutions can easily scale up or down to meet the changing demands of the application.

Security: PaaS providers are responsible for the security of the underlying infrastructure, which means businesses can focus on securing their applications and data.

AWS PaaS services:

AWS Elastic Beanstalk: Elastic Beanstalk is a PaaS solution that supports popular programming languages such as Java, .NET, PHP, Node.js, Python, Ruby, and Go. Elastic Beanstalk provides pre-configured environments, load balancing, auto-scaling, and monitoring.

AWS Lambda: Lambda is a serverless computing service that runs code in response to events and automatically scales up or down to meet the demands of the application. Lambda supports popular programming languages such as Java, .NET, Node.js, Python, Ruby, and Go.
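To make the event-driven model concrete, here is a minimal C# Lambda handler, shown only as a hedged sketch: the function, its input, and the logging are illustrative placeholders rather than a specific AWS sample, and the same idea applies to the other supported languages.

// Minimal AWS Lambda handler sketch in C# (requires the Amazon.Lambda.Core and
// Amazon.Lambda.Serialization.SystemTextJson NuGet packages).
using Amazon.Lambda.Core;

// Tells Lambda how to (de)serialize events and results for non-stream handlers.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace HelloLambda;

public class Function
{
    // Lambda invokes this method for every event and scales automatically by
    // running more concurrent instances as the event rate increases.
    public string FunctionHandler(string input, ILambdaContext context)
    {
        context.Logger.LogLine($"Processing event: {input}");
        return input.ToUpperInvariant();
    }
}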

Azure PaaS services:

Azure App Service: App Service is a PaaS solution that supports popular programming languages such as .NET, Java, Node.js, PHP, and Python. App Service provides pre-configured environments, load balancing, auto-scaling, and monitoring.

Azure Functions: Functions is a serverless computing service that runs code in response to events and automatically scales up or down to meet the demands of the application. Functions supports popular programming languages such as C#, Java, JavaScript, Python, and PowerShell.

GCP PaaS services:

Google App Engine: App Engine is a PaaS solution that supports popular programming languages such as Java, Python, PHP, Go, and Node.js. App Engine provides pre-configured environments, load balancing, auto-scaling, and monitoring.

Google Cloud Functions: Cloud Functions is a serverless computing service that runs code in response to events and automatically scales up or down to meet the demands of the application. Cloud Functions supports popular programming languages such as Node.js, Python, and Go.

Comparison of AWS, Azure, and GCP PaaS services:

Ease of use: All three cloud providers offer PaaS solutions that are easy to use and provide pre-configured environments, load balancing, auto-scaling, and monitoring.

Cost-effectiveness: All three cloud providers offer PaaS solutions on a pay-as-you-go model, which makes them cost-effective. However, pricing may vary depending on the specific service and usage.

Customization and flexibility: GCP App Engine offers more customization and flexibility than AWS Elastic Beanstalk and Azure App Service, as it allows developers to use custom runtimes and libraries.

Serverless computing: AWS Lambda and Azure Functions offer more serverless computing options than GCP Cloud Functions, as they support more programming languages and have more advanced features.

Vendor lock-in: All three cloud providers may lead to vendor lock-in, as businesses may find it difficult to migrate their applications to a different provider or to an on-premises infrastructure.

Based on the above comparison, the best PaaS service depends on the specific requirements and preferences of the business. However, in terms of popularity and breadth of offerings, AWS Elastic Beanstalk and Azure App Service are the most widely used PaaS solutions.

In conclusion, PaaS solutions provide a platform for developers to build and deploy applications without the need for infrastructure management. PaaS solutions offer benefits such as ease of use, reduced time-to-market, scalability, cost-effectiveness, and security. However, PaaS solutions also have drawbacks such as limited customization, dependency on the PaaS provider, vendor lock-in, and limited control. AWS, Azure, and GCP offer PaaS services that are easy to use and cost-effective, but differ in terms of customization, serverless computing options, and vendor lock-in.

Single Page Application — With Worker Process

What is Worker Process

A worker process is a type of computer process that runs in the background and performs tasks as assigned by a main process. It is typically used to offload tasks from a main process, allowing the main process to continue with other operations while the worker process handles the task in the background.

This article is part of a series of web architecture pattern references.

Single Page Application Frontend
Backend for SPA and Mobile App (Managed Services)
Backend for SPA and Mobile App (Serverless)
Multi Page Application (Integrated Web)
Single Page Application with Worker Process

How it works

In cloud computing, worker processes can be implemented using messaging or queues. The main process adds tasks to a queue, and worker processes monitor the queue for new tasks. When a worker process finds a new task in the queue, it takes the task and performs the work. This ensures that tasks are performed in the order in which they were received and that multiple worker processes can operate in parallel to handle a large number of tasks.

Examples of messaging systems used for worker process implementation in the cloud include RabbitMQ, Apache Kafka, and Amazon Simple Queue Service (SQS). The messaging system provides a way for the main process to communicate with the worker processes, allowing tasks to be dispatched and results to be returned.
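As a rough illustration of this pattern, the sketch below shows a minimal queue-polling worker in C# using the AWS SDK for SQS; the queue URL and the task-handling logic are assumptions for the example, not part of any specific system.

// Minimal SQS polling worker sketch (requires the AWSSDK.SQS NuGet package).
using System;
using Amazon.SQS;
using Amazon.SQS.Model;

var client = new AmazonSQSClient();
var queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/tasks"; // placeholder

while (true)
{
    // Long-poll the queue for up to 20 seconds to avoid busy-waiting.
    var response = await client.ReceiveMessageAsync(new ReceiveMessageRequest
    {
        QueueUrl = queueUrl,
        MaxNumberOfMessages = 10,
        WaitTimeSeconds = 20
    });

    foreach (var message in response.Messages)
    {
        Console.WriteLine($"Processing task: {message.Body}"); // the actual work goes here

        // Delete the message only after the work succeeds, so failed tasks
        // become visible again on the queue and can be retried.
        await client.DeleteMessageAsync(queueUrl, message.ReceiptHandle);
    }
}

Deleting the message only after successful processing is what lets multiple workers run in parallel safely: a message that a crashed worker never deleted reappears on the queue and is picked up by another worker.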

Components

The main components of a worker process can include:

  • Task queue: A task queue is used to store the tasks that need to be performed. The worker processes monitor the queue for new tasks.

  • Task dispatcher: The task dispatcher is responsible for allocating tasks to the worker processes. It ensures that tasks are dispatched to the worker processes in a fair and efficient manner.

  • Worker processes: The worker processes are the actual processes that perform the tasks. They receive tasks from the task queue, perform the work, and return the results.

  • Result storage: A result storage mechanism is used to store the results of the tasks performed by the worker processes. This can be a database, a file system, or another type of data storage.

  • Error handling: A worker process should have error handling mechanisms in place to deal with unexpected errors that may occur during task processing. This can include retrying failed tasks or logging the error for later review.

  • Monitoring and reporting: A monitoring and reporting system is used to monitor the performance of the worker processes and provide feedback to the main process. This can include information about the number of tasks processed, the processing time for each task, and any errors that may have occurred.

Architecture

There are multiple architecture patterns for implementing a worker process, and the right choice depends entirely on the use case.

Use Case 1

An example use case can be sending newsletters to all subscribed users at 10 AM.

  • The application has to execute long-running jobs at specific intervals.
  • The application should not consume any compute resources when it is not executing a job.
  • Jobs should be triggered by external or internal events, or by schedulers.

In this use case we can use AWS EventBridge Scheduler to trigger the jobs at specific intervals. AWS ECS tasks can be used to execute the jobs; once a job is done, the task ends its execution and de-allocates its compute resources. We don’t require any messaging/queue services in this case (we will cover that in the next use case).
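The job itself can be a plain console application packaged as a container image for the ECS task; when it finishes, the process exits, the task stops, and the compute is released. A minimal sketch is shown below, where the newsletter logic is only a placeholder.

// Minimal scheduled-job sketch: EventBridge Scheduler starts an ECS task that
// runs this console app, and the task's compute goes away once Main returns.
using System;
using System.Threading.Tasks;

public static class Program
{
    public static async Task Main()
    {
        Console.WriteLine("Newsletter job started.");

        // Placeholder for the real work: load subscribers, render the
        // newsletter, and send it through your email provider.
        await Task.Delay(TimeSpan.FromSeconds(1));

        Console.WriteLine("Newsletter job finished, exiting.");
        // Returning ends the process, which ends the ECS task and its billing.
    }
}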

The following diagram explains the architecture of the worker process along with other important components.

Single Page Application with Worker Process (Use Case 1)

Use Case 2

An example use case can be sending registration welcome messages to users after their successful registration on the platform.

  • After the registration process completes, a message object with the userId is pushed to a queue (e.g. SQS).
  • A Lambda function configured with an SQS trigger is invoked and sends the email to the user (a minimal handler sketch follows this list).
  • The database is updated if required.
  • As Lambda is a serverless component, cost is calculated based on the number of executions and execution time.
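Here is a minimal sketch of such an SQS-triggered Lambda handler in C#; the email-sending and database update are placeholders and would be replaced by your own services.

// Minimal SQS-triggered Lambda sketch (requires the Amazon.Lambda.Core,
// Amazon.Lambda.SQSEvents, and Amazon.Lambda.Serialization.SystemTextJson packages).
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.SQSEvents;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace RegistrationWorker;

public class Function
{
    public async Task FunctionHandler(SQSEvent sqsEvent, ILambdaContext context)
    {
        foreach (var record in sqsEvent.Records)
        {
            // The message body carries the userId pushed after registration.
            context.Logger.LogLine($"Sending welcome email for message: {record.Body}");

            // Placeholder: look up the user, send the email, and update the
            // database if required.
            await Task.CompletedTask;
        }
    }
}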

The following diagram explains the architecture of the worker process along with other important components.

Single Page Application with Worker Process (Use Case 2)

Multi Page Application

Introduction

A Multi-Page Application (MPA) is a traditional web application that consists of multiple separate pages, with each page being loaded in full when a user navigates to it. MPAs are typically built using server-side technologies such as PHP, Ruby on Rails, or .NET, and rely on server-side rendering to generate HTML that is sent to the browser.

Each page in an MPA is a self-contained unit that operates independently of the other pages, and navigation between pages involves making a full round trip to the server. This can result in slower page load times, as the entire page must be reloaded every time the user navigates to a new page.

This article is part of a series of web architecture pattern references.

Single Page Application Frontend
Backend for SPA and Mobile App (Managed Services)
Backend for SPA and Mobile App (Serverless)
Multi Page Application (Integrated Web)
Single Page Application with Worker Process

How it works

A multi-page application (MPA) is a type of web application that consists of multiple pages served from a server to a client’s web browser, where each page has a distinct URL. These pages are generated dynamically on the server side and displayed in the client’s browser, with each page request triggering a round trip to the server.

When a user requests a page, the server responds with the HTML, CSS, and JavaScript necessary to render the page, which is then displayed in the user’s web browser. Navigation between pages is achieved by the user clicking on links or by JavaScript code dynamically updating the URL and triggering a new page request to the server.
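For example, with ASP.NET Core MVC (one of the server-side options mentioned above), each navigation maps to a controller action that renders a complete HTML page on the server; the controller, data, and view below are placeholders for the sketch.

// Minimal ASP.NET Core MVC sketch: every visit to /Products triggers a round
// trip, and the server returns a fully rendered HTML page.
using Microsoft.AspNetCore.Mvc;

public class ProductsController : Controller
{
    public IActionResult Index()
    {
        // Placeholder data; a real MPA would load this from the database.
        var products = new[] { "Keyboard", "Mouse", "Monitor" };

        // Renders Views/Products/Index.cshtml on the server and returns HTML.
        return View(products);
    }
}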

Components

The main components of a Multi-page application (MPA) are:

  • Server: A server that runs a server-side language, such as PHP, Node.js, or Ruby, and is responsible for handling HTTP requests and generating dynamic content.

  • Client: A web browser that displays the dynamic content generated by the server, such as HTML, CSS, and JavaScript.

  • Router: A component that handles navigation between pages and updates the URL to reflect the current page.

  • HTML, CSS, and JavaScript: The technologies used to create the user interface and dynamic behavior of the pages.

  • Database: A database that stores the application’s data, such as user information, product catalog, and order history. The server accesses this data to generate dynamic content for each page.

  • APIs: Interfaces that allow the server and client to exchange data, such as REST APIs that enable the client to retrieve data from the server and send data to the server.

These components work together to provide the user with a seamless experience as they navigate the application, with each page request triggering a round-trip to the server and a rendering of the updated content in the client’s web browser.

Architecture

All of the above components are served from a single application.

Backend

There are different solutions and services to host the MPA backend on the cloud; here are some of the popular options:

  • VM (eg: EC2)
  • Container Services (eg: ECS)
  • Managed Services (eg: Elastic Beanstalk)
  • Serverless (eg: Lambda)
  • Kubernetes (eg: EKS)

Among the above list, we’re covering only Managed Services.

Managed Services

Using managed services like Amazon Web Services (AWS) Elastic Beanstalk and Microsoft Azure App Service provides several benefits, including:

  • Simplified deployment and management: Managed services take care of infrastructure management and deployment, allowing developers to focus on writing code and delivering features.

  • Automated scaling: Managed services automatically scale application instances based on demand, providing high availability and performance without manual intervention.

  • Reduced operational overhead: By outsourcing infrastructure management, organizations can reduce operational overhead and focus on delivering value to their customers.

  • Improved security: Managed services provide built-in security features, such as automatic patching and secure data storage, reducing the security burden on organizations.

  • Easy integration with other services: Managed services integrate seamlessly with other services, allowing organizations to leverage the full suite of cloud services to build and deploy their applications.

  • Cost-effectiveness: Managed services offer a cost-effective solution for deploying and managing applications, with flexible pricing models and the ability to pay for only the resources used.

  • Global availability: Managed services are designed to be globally available, providing fast and reliable access to applications from anywhere in the world.

MPA Architecture (Managed Services)

Backend for SPA and Mobile App (Serverless)

Introduction

Single Page Applications (SPAs) and mobile apps can use APIs to communicate with a server and retrieve or manipulate data. This can be used to display data on the frontend, or to send data to the server to be stored.

This article is part of a series of web architecture pattern references.

Single Page Application Frontend
Backend for SPA and Mobile App (Managed Services)
Backend for SPA and Mobile App (Serverless)
Multi Page Application (Integrated Web)
Single Page Application with Worker Process

API Technologies

For the backend of a mobile app or SPA, various API technologies can be used, including:

  • REST (Representational State Transfer): This is a popular, lightweight, and scalable API technology that uses HTTP requests to perform operations on resources.

  • GraphQL: This is a newer API technology that provides a more flexible and efficient alternative to REST, allowing clients to request only the data they need.

  • gRPC: This is a high-performance, open-source framework for building scalable APIs. It uses the Protocol Buffers data format and supports a variety of programming languages.

  • SOAP (Simple Object Access Protocol): This is an XML-based protocol for exchanging structured information in the implementation of web services.

These API technologies can be used to create APIs that can be consumed by mobile apps or SPAs. The choice of technology depends on the specific requirements of the project, such as performance, security, and data exchange requirements.
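As a concrete example, a minimal ASP.NET Core REST endpoint that an SPA or mobile app could consume might look like the sketch below; the route and data are placeholders.

// Minimal REST API sketch consumed by an SPA or mobile client.
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class TodosController : ControllerBase
{
    // GET /api/todos returns JSON that the client renders on its side.
    [HttpGet]
    public IActionResult GetAll()
    {
        var todos = new[]
        {
            new { Id = 1, Title = "Write docs", IsCompleted = false },
            new { Id = 2, Title = "Review PR", IsCompleted = true }
        };
        return Ok(todos);
    }
}

The SPA or mobile app calls this endpoint over HTTP (for example with fetch) and renders the returned JSON without a full page reload.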

Architecture

The architecture for all the above API technologies will be almost the same. We are considering the REST API for now as that is the most popular option.

There are different solutions and services to host the SPA backend on the cloud; here are some of the popular options:

  • VM (eg: EC2)
  • Container Services (eg: ECS)
  • Managed Services (eg: Elastic Beanstalk)
  • Serverless (eg: Lambda)
  • Kubernetes (eg: EKS)

Among the above list, we’re covering only Serverless and Managed Services.

Serverless

Using a serverless architecture has several benefits, including:

  • Cost-effectiveness: Serverless architectures allow you to pay only for the resources you actually use, making it a cost-effective solution for both small and large scale applications.

  • Scalability: Serverless architectures automatically scale to meet changing demand, eliminating the need for manual scaling and allowing organizations to focus on delivering value to their customers.

  • Reduced operational overhead: By outsourcing infrastructure management to the cloud provider, organizations can reduce operational overhead and focus on writing code and delivering features.

  • Improved time-to-market: Serverless architectures allow developers to focus on writing code, rather than managing infrastructure, improving time-to-market for new features and applications.

  • Flexibility: Serverless architectures provide a flexible and modular approach to building applications, allowing organizations to easily modify and extend their applications as needed.

  • High availability: Serverless architectures are designed to be highly available, with automatic failover and replication to ensure that applications remain available even in the event of a failure.

  • Improved security: Serverless architectures provide built-in security features, such as automatic patching and secure data storage, reducing the security burden on organizations.

  • Easy integration with other services: Serverless architectures integrate seamlessly with other cloud services, allowing organizations to leverage the full suite of cloud services to build and deploy their applications.

Backend Architecture (Serverless)

A complete architecture diagram with frontend and backend would be as follows:

SPA Architecture (Serverless)

Backend for SPA and Mobile App (Managed Services)

Introduction

Single Page Applications (SPAs) and mobile apps can use APIs to communicate with a server and retrieve or manipulate data. This can be used to display data on the frontend, or to send data to the server to be stored.

This article is part of a series of web architecture pattern references.

Single Page Application Frontend
Backend for SPA and Mobile App (Managed Services)
Backend for SPA and Mobile App (Serverless)
Multi Page Application (Integrated Web)
Single Page Application with Worker Process

API Technologies

For the backend of a mobile app or SPA, various API technologies can be used, including:

  • REST (Representational State Transfer): This is a popular, lightweight, and scalable API technology that uses HTTP requests to perform operations on resources.

  • GraphQL: This is a newer API technology that provides a more flexible and efficient alternative to REST, allowing clients to request only the data they need.

  • gRPC: This is a high-performance, open-source framework for building scalable APIs. It uses the Protocol Buffers data format and supports a variety of programming languages.

  • SOAP (Simple Object Access Protocol): This is an XML-based protocol for exchanging structured information in the implementation of web services.

These API technologies can be used to create APIs that can be consumed by mobile apps or SPAs. The choice of technology depends on the specific requirements of the project, such as performance, security, and data exchange requirements.

Architecture

The architecture for all the above API technologies will be almost the same. We are considering the REST API for now as that is the most popular option.

There are different solutions and services to host the SPA backend on the cloud; here are some of the popular options:

  • VM (eg: EC2)
  • Container Services (eg: ECS)
  • Managed Services (eg: Elastic Beanstalk)
  • Serverless (eg: Lambda)
  • Kubernetes (eg: EKS)

Among the above list, we’re covering only Serverless and Managed Services.

Managed Services

Using managed services like Amazon Web Services (AWS) Elastic Beanstalk and Microsoft Azure App Service provides several benefits, including:

  • Simplified deployment and management: Managed services take care of infrastructure management and deployment, allowing developers to focus on writing code and delivering features.

  • Automated scaling: Managed services automatically scale application instances based on demand, providing high availability and performance without manual intervention.

  • Reduced operational overhead: By outsourcing infrastructure management, organizations can reduce operational overhead and focus on delivering value to their customers.

  • Improved security: Managed services provide built-in security features, such as automatic patching and secure data storage, reducing the security burden on organizations.

  • Easy integration with other services: Managed services integrate seamlessly with other services, allowing organizations to leverage the full suite of cloud services to build and deploy their applications.

  • Cost-effectiveness: Managed services offer a cost-effective solution for deploying and managing applications, with flexible pricing models and the ability to pay for only the resources used.

  • Global availability: Managed services are designed to be globally available, providing fast and reliable access to applications from anywhere in the world.

Backend Architecture (Managed Services)

A complete architecture diagram with frontend and backend would be as follows:

SPA Architecture (Managed Services)

Single Page Application — Frontend

Introduction

A Single Page Application (SPA) is a web application that fits on a single web page and provides a seamless user experience by dynamically updating the content within that page, without the need for full page reloads. This results in a faster and more responsive user interface.

This article is part of a series of web architecture pattern references.

Single Page Application Frontend
Backend for SPA and Mobile App (Managed Services)
Backend for SPA and Mobile App (Serverless)
Multi Page Application (Integrated Web)
Single Page Application with Worker Process

How it works

Initial Load: When a user accesses the SPA, the initial HTML, CSS, and JavaScript files are loaded into the browser.

Dynamic Updates: Subsequent user interactions, such as clicking a link or submitting a form, do not result in a full page reload. Instead, the SPA dynamically updates the content within the same page using JavaScript, making API calls to the server as necessary to retrieve or update data.

URL Updates: The SPA also updates the URL in the browser’s address bar to reflect the current state of the application, without reloading the page. This allows users to bookmark or share a specific state of the application and enables the back and forward buttons to work as expected.

Components

SPA mainly consists of the following components:

  • HTML, CSS, and JavaScript: The HTML provides the structure of the page, the CSS provides the styling, and the JavaScript provides the dynamic behavior and interactivity.
  • Routing: The SPA manages navigation between different views or states by updating the URL and content within the same page, without full page reloads.
  • API Calls: The SPA makes API calls to the server as necessary to retrieve or update data, without reloading the page.
  • Data Management: The SPA manages data storage and retrieval, including client-side storage (such as local storage or session storage) and server-side API calls.
  • Client-side Template Engine: The SPA uses a client-side template engine, such as Handlebars or Mustache, to dynamically update the content within the page.
  • JavaScript Framework: The SPA often uses a JavaScript framework, such as Angular, React, or Vue.js, to provide a structure for building the application and abstracting away the underlying details of the DOM, API calls, and data management.

Architecture

All the above components can be put into three buckets:

Frontend

There are different solutions and services to host the SPA frontend on the cloud:

  • Static Storage + Content Delivery Network (eg: S3 + CloudFront)
  • VM (eg: EC2)
  • Container Services (eg: ECS)
  • Managed Services (eg: Elastic Beanstalk)

Among the above options, Static Storage + Content Delivery Network is the most scalable and cost-effective solution. Hosting a Single Page Application (SPA) in Amazon Simple Storage Service (S3) and Amazon CloudFront provides several benefits, including the following (a minimal deployment sketch follows the list):

  • Cost-effectiveness: S3 and CloudFront are highly cost-effective solutions for hosting static websites, and SPAs are typically built using mostly static assets.
  • Scalability: S3 and CloudFront are designed to scale to accommodate high traffic volumes, ensuring that your SPA will remain available and performant even during periods of high demand.
  • Global availability: CloudFront has a global network of edge locations that serve content to users with low latency and high throughput, making it ideal for delivering SPAs to users around the world.
  • Security: CloudFront integrates with AWS security services like AWS Web Application Firewall (WAF) and Amazon Route 53, providing a secure and scalable solution for hosting SPAs.
  • High performance: CloudFront uses caching and content delivery optimization techniques to deliver content to users as quickly as possible, improving the performance of your SPA.
  • Easy deployment and management: S3 and CloudFront provide a simple and flexible solution for deploying and managing your SPA, with features like versioning, automated backups, and easy integration with other AWS services.
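To illustrate the deployment workflow, a typical release is just a sync of the SPA build output to S3 followed by a CloudFront cache invalidation; in the sketch below the bucket name and distribution ID are placeholders.

# Upload the SPA build output to the S3 bucket backing the site.
aws s3 sync ./build s3://my-spa-bucket --delete

# Invalidate cached files so CloudFront starts serving the new version.
aws cloudfront create-invalidation --distribution-id E1234567890ABC --paths "/*"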

The architecture diagram for an SPA would look like below:

SPA Frontend

Architecture References for Web Apps

Introduction

This document provides a comprehensive guide to common and standard architecture patterns used in modern software development. The purpose of this document is to serve as a reference for developers and tech leads in the IT industry, and to provide a common understanding of the different architecture patterns that can be used to design, build, and deploy scalable and reliable applications.

The document covers five to six architecture patterns that have proven to be effective and efficient for a variety of use cases. Each architecture pattern includes a high-level overview, common use cases, and considerations for implementation, making it easier for you to understand the strengths and weaknesses of each pattern and choose the right one for your specific needs.

Whether you are designing a new application, refactoring an existing one, or simply looking to improve your understanding of architecture patterns, this document is an essential resource. By providing a comprehensive overview of the most common and effective architecture patterns, it helps you make informed decisions and ensures that your applications are built to meet the demands of modern users and the ever-evolving technology landscape.

Architecture Patterns

Here are the initial few architecture patterns we have shortlisted. We plan to add more patterns in the future.

Cloud Services

The architecture diagrams are created using AWS components; however, alternative services for Azure and GCP are also mentioned here.

Service Type            | AWS               | Azure                          | GCP
Virtual Machine         | EC2               | Azure VM                       | Compute Engine
Serverless (FaaS)       | Lambda            | Azure Functions                | Cloud Functions
Queue                   | SQS               | Azure Service Bus              | Cloud Tasks
Job Scheduling          | EventBridge       | Azure Scheduler                | Cloud Scheduler
Container Services      | ECS               | ACI                            | Cloud Run
PaaS (Managed Service)  | Elastic Beanstalk | App Service                    | App Engine
RDBMS                   | RDS               | Azure Database                 | Cloud SQL
NoSQL                   | DynamoDB          | Cosmos DB                      | Firestore
Load Balancing          | ELB               | Azure Load Balancer            | Cloud Load Balancing
API Management          | API Gateway       | API Management                 | API Gateway
CDN                     | CloudFront        | Azure Content Delivery Network | Cloud CDN
Static Storage          | S3                | Azure Blob Storage             | Cloud Storage

CQRS and Mediator Design Patterns in .Net 6

Introduction

CQRS

CQRS stands for Command and Query Responsibility Segregation, a design pattern that separates read and update operations for a data store. Implementing CQRS in your application can maximize its performance, scalability, and security. The flexibility created by migrating to CQRS allows a system to better evolve over time and prevents update commands from causing merge conflicts at the domain level. I’ve posted an article explaining how the CQRS pattern can be used to scale a MySQL database horizontally.

Mediator

The Mediator design pattern is one of the important and widely used behavioral design patterns. Mediator enables decoupling of objects by introducing a layer in between, so that interaction between objects happens via that layer. If objects interact with each other directly, the components become tightly coupled, which increases maintenance cost and makes the system hard to extend. The Mediator pattern focuses on providing a mediator between objects for communication, helping to implement loose coupling between them.

Problem

In traditional architecture, we use the same database for query and update operations. That is simple and works well for basic CRUD operations. In more complex applications, however, this approach can become unmanageable. You then start refactoring your code and try to separate read and update calls, probably by implementing the CQRS pattern. In the .NET world, developers often use one service to manage everything related to a specific entity, but if you implement the CQRS pattern you need separate services for queries and commands, and if you inject all these dependencies using Dependency Injection, a simple controller ends up with a lot of services.

Solution

The Mediator pattern can help you resolve the above problem. One mediator class or library can call the required command and query services based on the input models, so you only need to inject one interface, and that interface manages the further dependencies. We use Dependency Injection to make our application loosely coupled, and the Mediator pattern decouples and simplifies it further.

Package

The MediatR NuGet package can be used to implement the Mediator pattern in .NET. You can use the below commands to install the required packages:

    dotnet add package MediatR
    dotnet add package MediatR.Extensions.Microsoft.DependencyInjection

We will be using the Dapper micro ORM in this application for the database operations. I’ll explain Dapper in a separate article. Install Dapper by running the below command:

    dotnet add package Dapper

MediatR mainly uses two interfaces to implement the Mediator pattern:

  • IRequest<T>
  • IRequestHandler<T, U>

More details can be found here

How

I’m going to define a folder structure and naming convention as follows, but feel free to use your own conventions:

  • Commands: all commands (CUD operations); these are POCO classes that implement IRequest<T>
  • CommandHandlers: all the business logic to execute the commands
  • Queries: same as commands, but only for read operations
  • QueryHandlers: all the business logic to execute the queries

I’ve created two DbContexts called ToDoContextRead and ToDoContextWrite, both pointing to the same database, but in a production scenario you can use separate database connection strings for the two DbContexts. More on that topic is mentioned here.

ToDoContextRead will look like the below:

public class ToDoContextRead
{
    private readonly string _connectionString;

    public ToDoContextRead(IConfiguration configuration)
    {
        _connectionString = configuration.GetConnectionString("SqlConnectionRead");
    }

    public IDbConnection CreateConnection()
        => new SqlConnection(_connectionString);
}

I’ve also created two repositories for reading from and writing to the database.

ToDoRepositoryRead will look like the below:

public class ToDoRepositoryRead : IToDoRepositoryRead
{
    private readonly ToDoContextRead _context;

    public ToDoRepositoryRead(ToDoContextRead context)
    {
        _context = context;
    }

    public async Task<ToDo> GetToDoById(Guid id)
    {
        var query = "SELECT * FROM ToDos where id=@id";
        var param = new { id };
        using var connection = _context.CreateConnection();
        var todo = await connection.QueryFirstOrDefaultAsync<ToDo>(query, param);
        return todo;
    }

    public async Task<IEnumerable<ToDo>> GetToDos()
    {
        var query = "SELECT * FROM ToDos";
        using var connection = _context.CreateConnection();
        var todos = await connection.QueryAsync<ToDo>(query);
        return todos.ToList();
    }
}

ToDoRepositoryWrite code is as follows:

public class ToDoRepositoryWrite : IToDoRepositoryWrite
{
    private readonly ToDoContextWrite _context;

    public ToDoRepositoryWrite(ToDoContextWrite context)
    {
        _context = context;
    }

    public async Task<int> DeleteToDoById(Guid id)
    {
        var query = "DELETE FROM ToDos where id=@id";
        var param = new { id };
        using var connection = _context.CreateConnection();
        return await connection.ExecuteAsync(query, param);
    }

    public async Task<ToDo> GetToDoById(Guid id)
    {
        var query = "SELECT * FROM ToDos where id=@id";
        var param = new { id };
        using var connection = _context.CreateConnection();
        var todo = await connection.QueryFirstOrDefaultAsync<ToDo>(query, param);
        return todo;
    }

    public async Task<IEnumerable<ToDo>> GetToDos()
    {
        var query = "SELECT * FROM ToDos";
        using var connection = _context.CreateConnection();
        var todos = await connection.QueryAsync<ToDo>(query);
        return todos.ToList();
    }

    public async Task<int> SaveToDo(ToDo toDo)
    {
        var query = @"INSERT INTO ToDos
                    (Id, Title, Description, Created, IsCompleted)
                    VALUES (@Id, @Title, @Description, @Created, @IsCompleted);";
        toDo.Created = DateTime.Now;
        using var connection = _context.CreateConnection();
        return await connection.ExecuteAsync(query, toDo);
    }

    public async Task<int> UpdateToDo(ToDo toDo)
    {
        var query = @"UPDATE ToDos SET
                    Title=@Title, Description=@Description, Created=@Created, IsCompleted=@IsCompleted
                    WHERE Id=@Id";
        toDo.Created = DateTime.Now;
        using var connection = _context.CreateConnection();
        return await connection.ExecuteAsync(query, toDo);
    }
}

Commands and Queries

Now the important step is to create the commands and queries. Commands and queries are simple DTOs or POCO classes, but in order to work with the MediatR library, we need to implement an interface called IRequest<T>. A sample command is shown below. All other commands and queries can be found in the GitHub repo.

public class CreateToDoCommand : IRequest<ToDo>
{
    public string? Title { get; set; }
    public string? Description { get; set; }

    public CreateToDoCommand(string? title, string? description)
    {
        Title = title;
        Description = description;
    }

    public CreateToDoCommand()
    {
    }
}

In the above example, the ToDo class in the IRequest Interface is the return type. For each command or query there should be a Handler defined. Here is the Handler for CreateToDoCommand:

public class CreateToDoCommandHandler : IRequestHandler<CreateToDoCommand, ToDo>
{
    private readonly IToDoRepositoryWrite _toDoRepositoryWrite;

    public CreateToDoCommandHandler(IToDoRepositoryWrite toDoRepositoryWrite)
    {
        _toDoRepositoryWrite = toDoRepositoryWrite;
    }

    public async Task<ToDo> Handle(CreateToDoCommand request, CancellationToken cancellationToken)
    {
        var todo = new ToDo
        {
            Created = DateTime.Now,
            Description = request.Description,
            Title = request.Title,
            Id = Guid.NewGuid(),
            IsCompleted = false
        };

        var result = await _toDoRepositoryWrite.SaveToDo(todo);
        if (result > 0)
        {
            return todo;
        }
        else
        {
            throw new ArgumentException("Unable to save the ToDo");
        }
    }
}

In a similar way, you can define all your commands, queries, and handlers. Once that part is ready, you need to configure the Mediator service in the Program.cs file as follows:

    builder.Services.AddMediatR(typeof(ToDoContextRead).GetTypeInfo().Assembly);

In the above line, ToDoContextRead is used just for getting the assembly, and this line does all the magic of binding the commands and queries to the handlers.

Now you can inject the Mediator into your controller as follows:

private readonly IMediator _mediator;

public ToDosController(IMediator mediator)
{
    _mediator = mediator;
}

Now you can call any handler by simply sending a command or query as follows:

var todos = await _mediator.Send(new GetToDoDetailQuery { Id = id });

Complete code sample can be found at https://github.com/kannan-kiwitech/CqrsMediatorSampleApi.

Happy coding!

How to Scale an AWS RDS MySQL Database Horizontally?

What is scalability

The scalability of an application is a measure of the number of client requests it can handle simultaneously. When a hardware resource runs out and can no longer serve requests, that point is the limit of scalability; once it is reached, the application cannot handle additional requests. To handle additional requests efficiently, administrators scale the infrastructure by adding more resources such as RAM, CPU, storage, and network devices. Horizontal and vertical scaling are the two methods administrators use for capacity planning.

What is Horizontal Scaling?

Horizontal scaling is an approach of adding more devices to the infrastructure to increase the capacity and efficiently handle increasing traffic demands. As the name says, horizontal scaling is about expanding the capacity horizontally by adding extra servers. The load and processing power are shared among multiple servers within a system using a load balancer. It is also called scaling out.

What is Vertical Scaling?

Vertical scaling is a type of scalability wherein more computing and processing power is added to a machine to increase its performance. Also called scale-up, vertical scaling allows you to increase the machine’s capacity while maintaining resources within the same logical unit. The processor, memory, storage, and network capacity are increased in this approach.

Scalability Issues of RDBMS (Specific to MySQL)

As we discussed earlier, vertical scalability has hardware upper limits, and vertical scaling also requires some downtime; we cannot afford either in the database world. So we need to look into horizontal scalability options. In the database world, horizontal scaling is usually based on partitioning of data (each partition contains only part of the data). Partitioning requires more effort and planning during the design and development phases; that is a separate topic and we’re not discussing it here.

Scaling MySQL Using Read Replicas

The read replica feature allows you to replicate data from a MySQL server to one or more read-only servers. Replicas are updated asynchronously using the MySQL engine’s native binary log file position-based replication technology.

In this case, we will create a master-slave architecture and route all write queries to the master instance and all read queries to the slave instances, which are replicated from the master. We can have multiple slave instances running at once and scale our read operations horizontally, but the master can only be scaled vertically. In most cases databases are read-heavy, so this approach works for most use cases.

Step 1: Application Development Considerations

While developing the application, we should follow the CQRS design pattern. CQRS stands for Command and Query Responsibility Segregation, a pattern that separates read and update operations for a data store. Implementing CQRS in your application can maximize its performance, scalability, and security. The flexibility created by migrating to CQRS allows a system to better evolve over time and prevents update commands from causing merge conflicts at the domain level. In short, our application will have two connection strings: one for read operations and another for update operations.

As we have a single master write node, we can use its connection string for update operations. We will have multiple read nodes (slaves), so we need to set up a load balancer that distributes the load equally among them.
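A minimal configuration sketch of the two connection strings is shown below; the hostnames and credentials are placeholders, the SqlConnectionRead name mirrors the CQRS article earlier in this collection, and SqlConnectionWrite is an assumed counterpart.

{
  "ConnectionStrings": {
    "SqlConnectionWrite": "Server=master-db.xxxxxxxx.us-east-1.rds.amazonaws.com;Database=app;Uid=appuser;Pwd=***;",
    "SqlConnectionRead": "Server=read.example.com;Database=app;Uid=appuser;Pwd=***;"
  }
}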

Step 2: Setup Load Balancer for Read Only Nodes

We can set up Amazon Route 53 weighted record sets to distribute requests across the read replicas. Within a Route 53 hosted zone, create an individual record set for each DNS endpoint associated with a read replica. Then give them the same weight, and direct requests to the subdomain/endpoint of the record set.

How to Create Read Replicas

Assuming you already have a MySQL RDS instance in your AWS account, follow the below steps to create a read replica, and repeat the steps to create multiple replicas if required. In order to evaluate the load balancing feature, we should create at least two replicas.

  • Type rds in the AWS Console search box and select RDS
  • Select Databases from the left panel
  • Select the database you want to create a read replica of
  • Click on the Actions menu and select Create read replica as shown in the below screenshot.

  • Then select the DB instance class
  • Set Publicly Accessible to Yes
  • Select the VPC security groups (you can select the same security group as your master node)
  • Enter the DB instance identifier and then click Create read replica

Read replica will be created in a few minutes. Repeat the above steps to create one more node.

Create DNS Based Load Balancer

To create a DNS based load balancer, you have to set up a hosted zone in Route 53. Follow the below steps to create a hosted zone and record set.

  • Type route 53 in AWS Console search box and select Route 53 from the result.
  • Click Create Hosted Zone

  • Enter the Domain name, Description is optional
  • Select the Public hosted zone in Type option
  • Click Create hosted zone

  • Now we need to create Records in the newly created hosted zone
  • Select Create Record
  • Enter a subdomain name in the Name field
  • Select CNAME as Type
  • For Value enter the endpoint DNS name of the first read replica
  • For TTL value, set a value that is appropriate for your needs
  • For Routing Policy, choose Weighted
  • In the Weight field, enter a value. Be sure to use the same value for each replica’s record set
  • Provide an Id for the Record set
  • Repeat the steps to create records for all the replicas, keeping the same name (subdomain) for all the records (a CLI equivalent is sketched below).
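If you prefer the CLI over the console, the same weighted CNAME record can be created through the Route 53 API; in the sketch below the hosted zone ID, subdomain, weight, and replica endpoint are placeholders.

Contents of replica-1.json (the change batch):

{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "read.example.com",
      "Type": "CNAME",
      "SetIdentifier": "replica-1",
      "Weight": 10,
      "TTL": 60,
      "ResourceRecords": [{ "Value": "replica-1.xxxxxxxx.us-east-1.rds.amazonaws.com" }]
    }
  }]
}

Apply it with:

aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE --change-batch file://replica-1.json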

Now your records would look like the below screenshot:

Update the NS Record Entries in the Custom Name Server

Now copy all four NS record values. You have to go to your domain registrar’s portal (GoDaddy, Google Domains, etc.) and update the custom name server values there. In the case of Google Domains, that will look like below. The changes may take some time to reflect.

Once your NS records are properly updated, you will be able to use the newly created subdomain as your read-only database host name. You can use the same credentials as your master database to access the read-only, load-balanced instances.

Now you can configure your master hostname for CUD operations and the load-balanced hostname for read operations.

AWS Serverless and DynamoDb Single Table Design using .Net 6 – Part 2

Introduction

This is a continuation of the previous article AWS Serverless and DynamoDb Single Table Design using .Net 6 – Part 1. In this part we’re going to create a sample Serverless application using DynamoDb and deploy that on AWS Lambda.

Tools

Configure

Configure the AWS Toolkit using the link. While creating the IAM user, make sure to attach the below policies:

AWS Policies
Image: 1 

We’ll be using this user to create the serverless application and deploy it from Visual Studio or the dotnet tools command line interface.

Why Serverless

Serverless solutions offer technologies for running code, managing data, and integrating applications, all without managing servers. Serverless technologies feature automatic scaling, built-in high availability, and a pay-for-use billing model to increase agility and optimize costs. These technologies also eliminate infrastructure management tasks like capacity provisioning and patching, so you can focus on writing code that serves your customers. Some of the popular serverless solutions are AWS Lambda and Azure Functions.

Development

AWS Toolkit for Visual Studio provides many built-in templates for creating AWS based serverless applications quickly.

Create a new project in Visual Studio, type ‘serverless’ in the search box, and select AWS Serverless Application (.NET Core – C#).

Image: 2

Enter the project name and continue.

Image: 3

Then select the ASP.NET Core Web API blueprint from the selection and click Finish.

Image: 4

Once the project is ready in Visual Studio, you can see a file called serverless.template. This is the AWS CloudFormation Serverless Application Model (SAM) template file for declaring your serverless functions and other AWS resources. Make sure to add two policies (AWSLambda_FullAccess and AmazonDynamoDBFullAccess) as shown below; these permissions are required for the Lambda to read and write to DynamoDB.

Image: 5
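In the serverless.template file this amounts to listing the managed policies on the function resource; the fragment below is a minimal sketch in which the resource name follows the blueprint and the remaining properties (handler, runtime, events, and so on) are omitted for brevity.

"AspNetCoreFunction": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Policies": ["AWSLambda_FullAccess", "AmazonDynamoDBFullAccess"]
  }
}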

Then add the below NuGet packages:

AWSSDK.DynamoDBv2
Newtonsoft.Json
Swashbuckle.AspNetCore.SwaggerGen
Swashbuckle.AspNetCore.SwaggerUI

Create an interface called IEmployeeDb to define the methods:

public interface IEmployeeDb
{
    Task<IEnumerable<EmployeeModel>> GetAllReporteesAsync(string empCode);
    Task<EmployeeModel> GetEmployeeAsync(string empCode);
    Task SaveAsync(EmployeeModel model);
    Task SaveBatchAsync(List<EmployeeModel> models);
}

Create a class to implement the IEmployeeDb interface. The constructor would look like the below:

public EmployeeDb(ILogger<EmployeeDb> logger, IWebHostEnvironment configuration)
{
    // Comment out the below four lines if you're not using the DynamoDb local instance.
    if (configuration.IsDevelopment())
    {
        _clientConfig.ServiceURL = "http://localhost:8000";
    }
    _client = new AmazonDynamoDBClient(_clientConfig);
    _context = new DynamoDBContext(_client);
    _logger = logger;
}

We configured the ServiceURL to point to localhost in case we’re using the DynamoDB local instance. We also initialized the AmazonDynamoDBClient and DynamoDBContext. We’ll be mainly using the high-level API called DynamoDBContext for reading and writing data in DynamoDB.

The below methods are responsible for writing/saving the data:

public async Task SaveAsync(EmployeeModel model)
{
    await SaveInDbAsync(GetUserModelForSave(PrepareEmpModel(model)));
    await SaveInDbAsync(GetReporteeModelForSave(PrepareEmpModel(model)));
}

private async Task SaveInDbAsync(EmployeeModel model)
{
    await _context.SaveAsync(model);
    _logger.LogInformation("Saved {EmployeeCode} successfully!", model.EmployeeCode);
}

private EmployeeModel PrepareEmpModel(EmployeeModel model)
{
    model.EmployeeCode = model.EmployeeCode?.ToUpper();
    model.ReportingManagerCode = model.ReportingManagerCode?.ToUpper();
    return model;
}

When saving a record, this method will actually insert two objects, one for user type and the other for reportee type. We discussed the reason and logic for creating two entries in the previous part.

In the below method we implemented the logic for fetching the employee by EmployeeCode:

public async Task<EmployeeModel> GetEmployeeAsync(string empCode)
{
    var result = await _context.LoadAsync<EmployeeModel>(empCode.ToUpper(), empCode.ToUpper());
    if (result != null)
        result.ReportingManagerCode = ""; // ReportingManagerCode was same as EmployeeCode, so just remove it
    return result;
}

The next method covers the logic for fetching the reportees by EmployeeCode:

public async Task<IEnumerable<EmployeeModel>> GetAllReporteesAsync(string empCode)
{
    var config = new DynamoDBOperationConfig
    {
        QueryFilter = new List<ScanCondition>
        {
            new ScanCondition("Type", ScanOperator.Equal, "Reportee"),
            new ScanCondition("LastWorkingDate", ScanOperator.IsNull)
        }
    };
    var result = await _context.QueryAsync<EmployeeModel>(empCode.ToUpper(), config).GetRemainingAsync();
    return PrepareReporteeReturnModel(result); // swap the EmployeeCode and ReportingManagerCode and return
}

All the other code fragments and complete solution can be downloaded from the GitHub repository.

Once you complete the development, you need to create a DynamoDB table in your AWS account. There are many ways to create a resource in AWS: you can use the CLI, the Console, an SDK, or even the Visual Studio Toolkit. Below is the CLI command for creating the table and setting up the partition key (pk) and sort key (sk).

aws dynamodb create-table --table-name employees \
    --attribute-definitions AttributeName=EmployeeCode,AttributeType=S AttributeName=ReportingManagerCode,AttributeType=S \
    --key-schema AttributeName=EmployeeCode,KeyType=HASH AttributeName=ReportingManagerCode,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
    --table-class STANDARD

Now you can deploy the serverless application either using Visual Studio or dotnet tools. To deploy using Visual Studio, right click on the project and select the Publish to AWS Lambda button.

To deploy using dotnet tools you need to follow the below steps in the command line.

dotnet tool install -g Amazon.Lambda.Tools
cd "AWSServerlessDynamoDb/AWSServerlessDynamoDb" # or whatever the project folder is
dotnet lambda deploy-serverless

After successful deployment, you will get a Lambda endpoint (ApiURL) as below:

Image: 6

You can access the Swagger UI by adding /swagger to the above URL and test the APIs.

Complete source code can be found here.

Happy coding!!