Single Page Application — With Worker Process

What is a Worker Process

A worker process is a process that runs in the background and performs tasks assigned by a main process. It is typically used to offload work from the main process, which can then continue with other operations while the worker handles the task in the background.

This article is part of a series of web architecture pattern references:

Single Page Application Frontend
Backend for SPA and Mobile App (Managed Services)
Backend for SPA and Mobile App (Serverless)
Multi Page Application (Integrated Web)
Single Page Application with Worker Process

How it works

In cloud computing, worker processes are commonly implemented using messaging or queue services. The main process adds tasks to a queue, and worker processes monitor the queue for new tasks. When a worker finds a new task, it takes the task off the queue and performs the work. Tasks are picked up in the order they were received, and multiple workers can operate in parallel to handle a large volume of tasks.

Examples of messaging systems used for worker process implementation in the cloud include RabbitMQ, Apache Kafka, and Amazon Simple Queue Service (SQS). The messaging system provides a way for the main process to communicate with the worker processes, allowing tasks to be dispatched and results to be returned.
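The flow above can be sketched in-process with Python's standard library; a real system would put RabbitMQ, Kafka, or SQS between the producer and the workers, but the mechanics are the same: the main thread enqueues tasks, and a pool of workers drains the queue in parallel.

```python
import queue
import threading

def run_workers(tasks, num_workers=3):
    """Dispatch tasks to a pool of worker threads via a shared queue."""
    task_queue = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            task = task_queue.get()
            if task is None:          # sentinel: no more work
                task_queue.task_done()
                break
            outcome = task * 2        # stand-in for real work
            with lock:
                results.append(outcome)
            task_queue.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for task in tasks:                # main process enqueues tasks
        task_queue.put(task)
    for _ in threads:                 # one sentinel per worker
        task_queue.put(None)
    for t in threads:
        t.join()
    return results
```

Note that with parallel workers, tasks are dequeued in FIFO order but may complete in any order, which is why distributed designs track results by task id rather than by position.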

Components

The main components of a worker process can include:

  • Task queue: A task queue is used to store the tasks that need to be performed. The worker processes monitor the queue for new tasks.

  • Task dispatcher: The task dispatcher is responsible for allocating tasks to the worker processes. It ensures that tasks are dispatched to the worker processes in a fair and efficient manner.

  • Worker processes: The worker processes are the actual processes that perform the tasks. They receive tasks from the task queue, perform the work, and return the results.

  • Result storage: A result storage mechanism is used to store the results of the tasks performed by the worker processes. This can be a database, a file system, or another type of data storage.

  • Error handling: A worker process should have error handling mechanisms in place to deal with unexpected errors that may occur during task processing. This can include retrying failed tasks or logging the error for later review.

  • Monitoring and reporting: A monitoring and reporting system is used to monitor the performance of the worker processes and provide feedback to the main process. This can include information about the number of tasks processed, the processing time for each task, and any errors that may have occurred.
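The error-handling and result-storage components can be sketched as a simple consumer loop. This is illustrative only: the `handler` callable and the retry limit are assumptions, and a production system would write `results` and `errors` to durable storage rather than plain dicts.

```python
def process_with_retries(tasks, handler, max_attempts=3):
    """Consume (task_id, payload) pairs, retrying failures and
    recording results and errors for later review."""
    results, errors = {}, {}
    for task_id, payload in tasks:
        for attempt in range(1, max_attempts + 1):
            try:
                results[task_id] = handler(payload)   # result storage
                break
            except Exception as exc:
                if attempt == max_attempts:
                    errors[task_id] = str(exc)        # log for review
    return results, errors
```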

Architecture

There are multiple architecture patterns for implementing a worker process, and the right one depends entirely on the use case.

Use Case 1

An example use case can be sending newsletters to all subscribed users at 10 AM.

  • The application has to execute long-running jobs at specific intervals.
  • The application should not consume any compute resources when it is not executing jobs.
  • Jobs should be triggered by external or internal events, or by schedulers.

In this use case we can use AWS EventBridge Scheduler to trigger the jobs at specific intervals. AWS ECS tasks can be used to execute the jobs; once a job is done, the task ends its execution and de-allocates the compute resources. We don't require any messaging/queue services in this case (that is covered in the next use case).
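In EventBridge Scheduler, the 10 AM newsletter trigger would be declared as a schedule expression (e.g. `cron(0 10 * * ? *)`); the next-run logic such a scheduler encapsulates is roughly:

```python
from datetime import datetime, timedelta

def next_daily_run(now, hour=10):
    """Next occurrence of a daily trigger, e.g. the 10 AM newsletter job."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:              # today's slot already passed
        candidate += timedelta(days=1)
    return candidate
```

At that instant the scheduler launches the ECS task, which runs the job and exits, so no compute is billed between runs.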

The following diagram explains the architecture of the worker process along with other important components.

Single Page Application with Worker Process (Use Case 1)

Use Case 2

An example use case is sending a registration welcome message to a user after their successful registration on the platform.

  • After the registration process completes, a message object containing the userId is pushed to a queue (e.g. SQS).
  • A Lambda configured with an SQS trigger is invoked and sends the email to the user.
  • The database is updated if required.
  • Because Lambda is a serverless component, cost is calculated based on the number of executions and their duration.
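A minimal sketch of the Lambda side of this flow, assuming the standard SQS event shape (`Records` with a JSON `body`); `send_welcome_email` is a hypothetical placeholder for a real email integration such as SES:

```python
import json

def handler(event, context=None):
    """Lambda handler fired by an SQS trigger: one event carries a
    batch of records, each with the message pushed at registration."""
    sent = []
    for record in event["Records"]:
        message = json.loads(record["body"])
        sent.append(send_welcome_email(message["userId"]))
    return {"sent": sent}

def send_welcome_email(user_id):
    # Placeholder: integrate with an email service here.
    return f"welcome-email-sent:{user_id}"
```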

The following diagram explains the architecture of the worker process along with other important components.

Single Page Application with Worker Process (Use Case 2)


Multi Page Application

Introduction

A Multi-Page Application (MPA) is a traditional web application that consists of multiple separate pages, with each page being loaded in full when a user navigates to it. MPAs are typically built using server-side technologies such as PHP, Ruby on Rails, or .NET, and rely on server-side rendering to generate HTML that is sent to the browser.

Each page in an MPA is a self-contained unit that operates independently of the other pages, and navigation between pages involves making a full round trip to the server. This can result in slower page load times, as the entire page must be reloaded every time the user navigates to a new page.


How it works

Multi-page application (MPA) is a type of web application that consists of multiple pages served from a server to a client’s web browser, where each page is a distinct URL. These pages are generated dynamically on the server-side and then rendered on the client-side, with each page request triggering a round-trip to the server.

When a user requests a page, the server responds with the HTML, CSS, and JavaScript necessary to render the page, which is then displayed in the user’s web browser. Navigation between pages is achieved by the user clicking on links or by JavaScript code dynamically updating the URL and triggering a new page request to the server.

Components

The main components of a Multi-page application (MPA) are:

  • Server: A server that runs a server-side language, such as PHP, Node.js, or Ruby, and is responsible for handling HTTP requests and generating dynamic content.

  • Client: A web browser that displays the dynamic content generated by the server, such as HTML, CSS, and JavaScript.

  • Router: A component that handles navigation between pages and updates the URL to reflect the current page.

  • HTML, CSS, and JavaScript: The technologies used to create the user interface and dynamic behavior of the pages.

  • Database: A database that stores the application’s data, such as user information, product catalog, and order history. The server accesses this data to generate dynamic content for each page.

  • APIs: Interfaces that allow the server and client to exchange data, such as REST APIs that enable the client to retrieve data from the server and send data to the server.

These components work together to provide the user with a seamless experience as they navigate the application, with each page request triggering a round-trip to the server and a rendering of the updated content in the client’s web browser.
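The round trip can be sketched as a toy server-side handler (the route table and page content are hypothetical): every navigation returns a complete HTML document rendered on the server.

```python
def render_page(path, db):
    """Server-side MPA handler: each request produces a full HTML page."""
    pages = {
        "/": lambda: "<h1>Home</h1>",
        "/products": lambda: "<h1>Products</h1><ul>" + "".join(
            f"<li>{p}</li>" for p in db["products"]) + "</ul>",
    }
    body = pages.get(path, lambda: "<h1>404</h1>")()   # unknown path -> 404 page
    return f"<html><body>{body}</body></html>"
```

Contrast this with an SPA, where only data (JSON) crosses the wire after the initial load.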

Architecture

All of the above components are served from a single application.

Backend

There are different solutions and services for hosting the MPA backend on the cloud; here are some popular options:

  • VM (eg: EC2)
  • Container Services (eg: ECS)
  • Managed Services (eg: Elastic Beanstalk)
  • Serverless (eg: Lambda)
  • Kubernetes (eg: EKS)

Among the above list, we’re covering only Managed Services.

Managed Services

Using managed services like Amazon Web Services (AWS) Elastic Beanstalk and Microsoft Azure App Service provides several benefits, including:

  • Simplified deployment and management: Managed services take care of infrastructure management and deployment, allowing developers to focus on writing code and delivering features.

  • Automated scaling: Managed services automatically scale application instances based on demand, providing high availability and performance without manual intervention.

  • Reduced operational overhead: By outsourcing infrastructure management, organizations can reduce operational overhead and focus on delivering value to their customers.

  • Improved security: Managed services provide built-in security features, such as automatic patching and secure data storage, reducing the security burden on organizations.

  • Easy integration with other services: Managed services integrate seamlessly with other services, allowing organizations to leverage the full suite of cloud services to build and deploy their applications.

  • Cost-effectiveness: Managed services offer a cost-effective solution for deploying and managing applications, with flexible pricing models and the ability to pay for only the resources used.

  • Global availability: Managed services are designed to be globally available, providing fast and reliable access to applications from anywhere in the world.

MPA Architecture (Managed Services)


Backend for SPA and Mobile App (Serverless)

Introduction

Single Page Applications (SPAs) and mobile apps can use APIs to communicate with a server and retrieve or manipulate data. This can be used to display data on the frontend, or to send data to the server to be stored.


API Technologies

For the backend of a mobile app or SPA, various API technologies can be used, including:

  • REST (Representational State Transfer): This is a popular, lightweight, and scalable API technology that uses HTTP requests to perform operations on resources.

  • GraphQL: This is a newer API technology that provides a more flexible and efficient alternative to REST, allowing clients to request only the data they need.

  • gRPC: This is a high-performance, open-source framework for building scalable APIs. It uses the Protocol Buffers data format and supports a variety of programming languages.

  • SOAP (Simple Object Access Protocol): This is an XML-based protocol for exchanging structured information in the implementation of web services.

These API technologies can be used to create APIs that can be consumed by mobile apps or SPAs. The choice of technology depends on the specific requirements of the project, such as performance, security, and data exchange requirements.
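To make the REST option concrete, here is a minimal dispatch sketch for a hypothetical `/users` resource, showing how HTTP verbs map onto operations against a backing store:

```python
def rest_dispatch(method, path, store, body=None):
    """Map HTTP verbs on a /users collection to CRUD operations."""
    if path == "/users" and method == "GET":
        return 200, list(store.values())          # list the collection
    if path == "/users" and method == "POST":
        store[body["id"]] = body                  # create a resource
        return 201, body
    if path.startswith("/users/"):
        user_id = path.rsplit("/", 1)[1]
        if method == "GET":
            return (200, store[user_id]) if user_id in store else (404, None)
        if method == "DELETE":
            store.pop(user_id, None)
            return 204, None
    return 405, None                              # method not allowed
```

A GraphQL backend would replace this verb/path mapping with a single endpoint resolving client-specified queries.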

Architecture

The architecture for all the above API technologies will be almost the same. We are considering the REST API for now as that is the most popular option.

There are different solutions and services for hosting the SPA backend on the cloud; here are some popular options:

  • VM (eg: EC2)
  • Container Services (eg: ECS)
  • Managed Services (eg: Elastic Beanstalk)
  • Serverless (eg: Lambda)
  • Kubernetes (eg: EKS)

Among the above list, we’re covering only Serverless and Managed Services.

Serverless

Using a serverless architecture has several benefits, including:

  • Cost-effectiveness: Serverless architectures allow you to pay only for the resources you actually use, making it a cost-effective solution for both small and large scale applications.

  • Scalability: Serverless architectures automatically scale to meet changing demand, eliminating the need for manual scaling and allowing organizations to focus on delivering value to their customers.

  • Reduced operational overhead: By outsourcing infrastructure management to the cloud provider, organizations can reduce operational overhead and focus on writing code and delivering features.

  • Improved time-to-market: Serverless architectures allow developers to focus on writing code, rather than managing infrastructure, improving time-to-market for new features and applications.

  • Flexibility: Serverless architectures provide a flexible and modular approach to building applications, allowing organizations to easily modify and extend their applications as needed.

  • High availability: Serverless architectures are designed to be highly available, with automatic failover and replication to ensure that applications remain available even in the event of a failure.

  • Improved security: Serverless architectures provide built-in security features, such as automatic patching and secure data storage, reducing the security burden on organizations.

  • Easy integration with other services: Serverless architectures integrate seamlessly with other cloud services, allowing organizations to leverage the full suite of cloud services to build and deploy their applications.
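A minimal sketch of what one serverless REST endpoint might look like, assuming an API Gateway proxy-style event (`httpMethod`/`path` fields); the `/health` route is hypothetical:

```python
import json

def lambda_handler(event, context=None):
    """Lambda behind API Gateway: route on method + path, return a
    proxy-integration response (statusCode + JSON body)."""
    if event["httpMethod"] == "GET" and event["path"] == "/health":
        return {"statusCode": 200, "body": json.dumps({"status": "ok"})}
    return {"statusCode": 404, "body": json.dumps({"message": "not found"})}
```

Billing follows invocations of this function, which is what makes the pay-per-use model possible.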

Backend Architecture (Serverless)

A complete architecture diagram with frontend and backend would be as follows:

SPA Architecture (Serverless)


Architecture References for Web Apps

Introduction

This document provides a comprehensive guide to common and standard architecture patterns used in modern software development. Its purpose is to serve as a reference for developers and tech leads in the IT industry, and to provide a common understanding of the different architecture patterns that can be used to design, build, and deploy scalable and reliable applications.

The document includes several architecture patterns that have proven effective and efficient for a variety of use cases. Each pattern includes a high-level overview, common use cases, and implementation considerations, making it easier to understand the strengths and weaknesses of each pattern and choose the right one for your specific needs.

Whether you are designing a new application, refactoring an existing one, or simply looking to improve your understanding of architecture patterns, this document is an essential resource. By providing a comprehensive overview of the most common and effective architecture patterns, it helps you make informed decisions and ensures that your applications are built to meet the demands of modern users and the ever-evolving technology landscape.

Architecture Patterns

Here are the initial few architecture patterns we have shortlisted. We plan to add more patterns in the future.

Cloud Services

The architecture diagrams are created using AWS components; however, the equivalent services for Azure and GCP are also listed here.

| Service Type | AWS | Azure | GCP |
| --- | --- | --- | --- |
| Virtual Machine | EC2 | Azure VM | Compute Engine |
| Serverless (FaaS) | Lambda | Azure Functions | Cloud Functions |
| Queue | SQS | Azure Service Bus | Cloud Tasks |
| Job scheduling | EventBridge | Azure Scheduler | Cloud Scheduler |
| Container Services | ECS | ACI | Cloud Run |
| PaaS (Managed Service) | Elastic Beanstalk | App Service | App Engine |
| RDBMS | RDS | Azure Database | Cloud SQL |
| NoSQL | DynamoDb | CosmosDb | Firestore |
| Load Balancing | ELB | Azure Load Balancer | Cloud Load Balancing |
| API Management | API Gateway | API Management | API Gateway |
| CDN | CloudFront | Azure Content Delivery Network | Cloud CDN |
| Static Storage | S3 | Azure Blob Storage | Cloud Storage |


AWS Serverless and DynamoDb Single Table Design using .Net 6 – Part 1

Introduction

When developing a high-performance, scalable application, teams tend to reach for the technologies below.

  • Serverless Functions or Lambdas
  • Cloud managed NoSQL databases like DynamoDb or CosmosDb
  • Database design strategies like Single Table Design

In this article we'll cover Single Table Design. In the next part we'll create a Serverless application using .Net 6 and DynamoDb.

Use Case

Recently we worked on a social networking platform where we used Single Table Design. That use case is complex and overwhelming for a beginner, so let's consider an imaginary one instead (it may not be a perfect fit for Single Table Design): an Employee REST API that will help us design a basic single table. Here are the features of the API:

  • User will be able to add an Employee
  • User will be able to fetch the Employee details with the EmployeeCode
  • User will be able to fetch the immediate Reportees (for the sake of simplicity) of an Employee/Manager

Schema

EmployeeCode | EmailId | FirstName | LastName | ReportingManagerCode

In the RDBMS world, EmployeeCode would be the primary key and ReportingManagerCode would be a foreign key pointing back to the same table via a self join.

Single Table Design

In RDBMS, we use multiple tables in a database; those tables may be interrelated with foreign keys, and we tend to normalize them up to a certain level to avoid duplicate storage as far as we can. In the NoSQL world (especially in DynamoDb) there are no foreign keys or joins (and there is a reason for that), and duplication is acceptable. In Single Table Design, we put all the entities (e.g. Post, User, Comment, Follower) in a single table and may use a 'Type' attribute to distinguish them.

Why

In a read-heavy database, millions of users may be accessing different content at the same time, so you have to fetch data as fast as possible. To return data quickly you have to minimize the number of database requests per API call. In RDBMS, even though we're making a single call, most queries involve complex joins across multiple tables, and as the data size grows these queries take more time.

If you have to fetch the posts of all the users you follow, then in SQL-based databases you have to join 'Users', 'Followers', 'Posts', 'Comments', etc. If you store the entities in separate DynamoDb tables, you have to make multiple calls from your backend to DynamoDb, do some JSON manipulation, and return the result to the frontend. We cannot afford that many Db calls from the backend, so we need to get all the data in a single Db request.

How

In DynamoDb, each table must have a partition key, which is a string, numeric, or binary value. This key is hashed to locate items in constant time regardless of table size. It is conceptually different from an ID or primary key field in a SQL-based database and does not relate to data in other tables. When there is only a partition key, its values must be unique across items in the table.

Each table can optionally have a sort key. This allows you to search and sort within items that match a given primary key. While you must search on exact single values in the partition key, you can pattern search on sort keys. It’s common to use a numeric sort key with timestamps to find items within a date range, or use string search operators to find data in hierarchical relationships.
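The query semantics can be sketched in a few lines (illustrative only; a plain dict keyed by `(pk, sk)` tuples stands in for DynamoDb): exact match on the partition key, optional prefix match on the sort key, results ordered by sort key.

```python
def query(table, pk, sk_begins_with=None):
    """Simulate a DynamoDb Query: pk equality plus optional
    begins_with() condition on the sort key."""
    items = [item for (p, s), item in table.items()
             if p == pk and (sk_begins_with is None
                             or s.startswith(sk_begins_with))]
    return sorted(items, key=lambda i: i["sk"])   # results come back sk-ordered
```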

With only partition keys and sort keys, the possible types of query are limited unless you duplicate data in the table (duplication itself is harmless since storage is cheap, but keeping multiple copies in sync is another headache). To address this, DynamoDb also offers two types of indexes: local secondary indexes (LSIs) and global secondary indexes (GSIs). We can discuss these in a separate session.

Single Table Design is not 'agile': you have to identify the data access patterns at the beginning of the project, otherwise you may later discover a use case that requires an entire redesign of the data structure. So let's identify our access patterns first.

Data Access Patterns

Our access patterns follow from the API features we discussed earlier. In our case they are simple for now.

  • User will be able to fetch the Employee details with the EmployeeCode
  • User will be able to fetch the immediate Reportees of the Employee/Manager
  • User will be able to add an Employee

Let's create the table as follows: the partition key is EmployeeCode, and the sort key is ReportingManagerCode.

Table: 1

Things look simple so far: we can get an entity by its pk. Let's evaluate the next access pattern and come back here if required.

User will be able to fetch the immediate Reportees of the Employee/Manager

Suppose we need to fetch all the direct reportees of user 11. If we could query on the sort key alone, we could fetch all the reportees of 11 in a single query, but here is where the challenge starts: you cannot query a DynamoDb table without providing a pk equality condition. So if you query pk=11, you'll get only one record.

The next step is to evaluate whether an LSI or a GSI can solve the problem. An LSI is just an alternative sort key on the same partition key, and since the query still has to provide the pk, an LSI won't work here. A GSI lets you define a new pk and sk, but then you query against the index rather than the main pk/sk.

The next option is to duplicate the data, so let's think about that. In the table structure above, one record was self-sufficient (it had both EmployeeCode and ReportingManagerCode), but in the format below we separated the entity. We also added a type attribute to identify the type of each entity.

Table: 2

In user entities both pk and sk are the same (i.e. EmployeeCode). We duplicated the entities, swapped the pk and sk, and assigned the type 'reportee'. Now let's evaluate the query: if we run pk=11, we get three records.

Table: 3

One record is for the manager and the others are for the reportees; we can filter by the type attribute using filter expressions if required. The second access pattern is solved, but now we have a problem with the first one: our first query was pk=11, which now returns three records. We can fix that by querying pk=11 AND sk=11. Solved!
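The whole design can be sketched with an in-memory dict standing in for the table (function names here are illustrative, not part of any SDK): each employee is written once as a 'user' item keyed `(code, code)`, and duplicated as a 'reportee' item keyed `(manager_code, code)`.

```python
def add_employee(table, code, manager_code, **attrs):
    """Write the employee item plus a duplicated 'reportee' item
    under the manager's partition."""
    table[(code, code)] = {"type": "user", "code": code,
                           "manager": manager_code, **attrs}
    if manager_code and manager_code != code:
        table[(manager_code, code)] = {"type": "reportee", "code": code}

def get_employee(table, code):
    """Access pattern 1: pk = code AND sk = code."""
    return table.get((code, code))

def get_reportees(table, manager_code):
    """Access pattern 2: query pk = manager_code, filter type == 'reportee'."""
    return [item for (pk, sk), item in table.items()
            if pk == manager_code and item["type"] == "reportee"]
```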

Conclusions

The use case we discussed here was a very basic one, yet we still had to take care of multiple things and duplicate the data. Duplicating data further complicates updates and deletions; you may need to implement message queues like SQS and a BackgroundService to handle that.

In the next article, we will cover the actual practical implementation with code samples.

Generic Message Queue implementation using AWS SQS and .Net 6 BackgroundService

Requirement

In most of the enterprise projects we develop, we have to implement cross-cutting concerns like audit logs. The important factor is that application performance should not be impacted by these audit logs (or similar cross-cutting concerns). So how do we implement this without compromising performance?

BackgroundService

Background tasks and scheduled jobs are something you might need to use in any application, whether or not it follows the microservices architecture pattern. The difference when using a microservices architecture is that you can implement the background task in a separate process/container for hosting so you can scale it down/up based on your need.

From a generic point of view, in .NET we call these types of tasks Hosted Services, because they are services/logic hosted within your host/application/microservice. Note that in this case, a hosted service simply means a class with the background task logic.

In our sample we will configure the BackgroundService in the same Web API project for the sake of simplicity, but in a real production scenario you should consider a separate service. The BackgroundService can then offload the audit-log writing from the Web API. Now the challenge is: how do we send the audit log objects to the BackgroundService?

Message Queue

Message queues allow different parts of a system to communicate and process operations asynchronously. A message queue provides a lightweight buffer which temporarily stores messages, and endpoints that allow software components to connect to the queue in order to send and receive messages. The messages are usually small, and can be things like requests, replies, error messages, or just plain information. To send a message, a component called a producer adds a message to the queue. The message is stored on the queue until another component called a consumer retrieves the message and does something with it.

Different implementations of message queues exist, and multiple cloud providers offer their own. Here we're using AWS SQS.

Steps

Create an ASP.NET Core Web API

Install the dependencies (the AWSSDK.SQS and AWSSDK.Extensions.NETCore.Setup NuGet packages, plus Newtonsoft.Json for serialization)

Create a folder Services -> Contracts and create a Generic Interface called IMessageService as follows:

public interface IMessageService<T>
{
    Task DeleteMessageAsync(string id);
    Task<Dictionary<string, T?>> ReceiveMessageAsync(int maxMessages = 1);
    Task SendMessage(T message);
}

Now let’s create the SQS Message Service by implementing the above Interface

Let’s first create the Constructor and this will create AWS SQS Client. Before doing that we need to create an IAM user with required permissions.

Create IAM user

  • Go to the AWS console and search IAM.
  • Click on the Users panel on the left.
  • Click on the Add User button.
  • Provide a user name and select the Access key - Programmatic access checkbox and click Next: Permissions button.
  • Click on Attach existing policies directly tab
  • Search AmazonSQSFullAccess and select that policy. We would require FullAccess because we will be creating the Queue programmatically if that Queue does not exist.
  • Click on Next button twice and finally click on Create User button.
  • Copy the Access key ID and Secret key ID and store in a safe place.

Configure the AWS Credentials

There are multiple ways to configure the AWS credentials. I used the AWS CLI, which can be downloaded from here. Once it's downloaded and installed on your machine, run the below command in a terminal or command prompt.

aws configure

The above command will prompt for the following details and store them in the ~/.aws/credentials and ~/.aws/config files. The AWS SDK will pick up these credentials when creating the clients.

  • AWS Access Key ID
  • AWS Secret Access Key
  • Default region name

Constructor code will look like the below:

public SqsGenericService(ILogger<SqsGenericService<T>> logger, IConfiguration configuration, IHostingEnvironment env)
{
    _logger = logger;
    var options = configuration.GetAWSOptions();
    // This queueName will be used to create the SQS Queue for each type of object in different environments
    var queueName = $"que-{env.EnvironmentName.ToLower()}-{typeof(T).Name.ToLower()}";
    _amazonSQSClient = options.CreateServiceClient<IAmazonSQS>();
    _queueUrl = GetQueueUrl(queueName).Result;
}

Most of the code is self-explanatory. The configuration.GetAWSOptions() call fetches the AWS configuration.

Dynamic Queue creation for each environment and entity

The queueName variable is created by concatenating the environment name and the name of the generic entity. The GetQueueUrl() method fetches the queue URL if the queue already exists, or creates the queue otherwise.

The next method is SendMessage. It accepts a generic message object and uses the AWS SQS client to push the serialized object to the queue.

public async Task SendMessage(T message)
{
    var messageBody = JsonConvert.SerializeObject(message);
    await _amazonSQSClient.SendMessageAsync(new SendMessageRequest
    {
        QueueUrl = _queueUrl,
        MessageBody = messageBody
    });
    _logger.LogInformation("Message {message} sent successfully to {_queueUrl}.", message, _queueUrl);
}

The next method is ReceiveMessageAsync. It fetches messages from the queue and converts them to a Dictionary of MessageReceiptHandle to MessageBody, since the consumer of this service requires the ReceiptHandle to delete a message after processing it.

Worker Process

To implement the worker process we decided, as mentioned earlier, to use BackgroundService. Here is the complete code for the AuditLogWorker class.

public class AuditLogWorker : BackgroundService
{
    private readonly ILogger<AuditLogWorker> _logger;
    private readonly IMessageService<AuditLogModel> _messageClient;

    public AuditLogWorker(ILogger<AuditLogWorker> logger, IMessageService<AuditLogModel> messageClient)
    {
        _logger = logger;
        _messageClient = messageClient;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("AuditLogWorker running at: {Time}", DateTime.Now);
            var messages = await _messageClient.ReceiveMessageAsync();
            foreach (var message in messages)
            {
                // You can write your custom logic here...
                _logger.LogInformation("AuditLogWorker processed message {userID}, {Action}", message.Value?.UserId, message.Value?.Message);
                await _messageClient.DeleteMessageAsync(message.Key);
            }
            await Task.Delay(5000, stoppingToken); // Delay can be set according to your business requirement.
        }
    }
}

Modify the Program.cs to add the necessary dependencies and configure the hosted service as below:

builder.Services.AddSingleton<IMessageService<AuditLogModel>, SqsGenericService<AuditLogModel>>();
builder.Services.AddHostedService<AuditLogWorker>();

Create a simple REST API method that accepts an object and pushes it to the queue, then test it. The test method will look like the below:

[HttpPost]
public async Task Post([FromBody] AuditLogModel model)
{
    await _messageClient.SendMessage(model);
    _logger.LogInformation("Message pushed to the queue successfully.");
}

That's it folks, I hope everybody enjoyed the blog. The entire code can be downloaded from the GitHub repo.

Happy coding!!!