Microservices Architecture on .NET
with applying CQRS, Clean
Architecture and Event-Driven
Communication
Mehmet Ozkaya
Published in aspnetrun · 12 min read · May 8, 2020
Building microservices on .NET using ASP.NET Web API, Docker, RabbitMQ, gRPC, Ocelot API Gateway, MongoDB, Redis, PostgreSQL, SQL Server, Entity Framework Core, Dapper, CQRS and a Clean Architecture implementation.
Introduction
In this article, we will show how to build microservices on .NET using ASP.NET Core Web API applications, Docker for containerization and orchestration, gRPC and RabbitMQ for inter-service communication, the Ocelot API Gateway, different database platforms, both NoSQL (MongoDB, Redis) and relational (PostgreSQL, SQL Server), Dapper and Entity Framework Core as ORM tools, and CQRS with a Clean Architecture implementation following best practices.
Look at the big picture of the final architecture of the system.
There are several microservices implementing e-commerce modules (Catalog, Basket, Discount and Ordering) with NoSQL (MongoDB, Redis) and relational (PostgreSQL, SQL Server) databases, communicating over gRPC and RabbitMQ event-driven communication, and fronted by the Ocelot API Gateway.
Get the Udemy course at a discount: Microservices Architecture and Implementation on .NET.
Along with this, the features below are implemented in the run-aspnetcore-microservices repository;
Catalog microservice which includes;
• ASP.NET Core Web API application
• REST API principles, CRUD operations
• MongoDB NoSQL database connection and containerization on Docker
• N-Layer implementation with Repository Pattern
• Swagger Open API implementation
• Dockerfile and docker-compose implementation
Basket microservice which includes;
• ASP.NET Core Web API application
• REST API principles, CRUD operations
• Redis database connection and containerization on Docker
• Consume Discount Grpc Service for inter-service sync communication to
calculate product final price
• Publishing a BasketCheckout queue with MassTransit and RabbitMQ
• Swagger Open API implementation
• Dockerfile and docker-compose implementation
Discount microservice which includes;
• ASP.NET Grpc Server application
• Build a Highly Performant inter-service gRPC Communication with
Basket Microservice
• Exposing gRPC services by creating Protobuf messages
• Using Dapper for micro-orm implementation to simplify data access and
ensure high performance
• PostgreSQL database connection and containerization
• Dockerfile and docker-compose implementation
Microservices Communication
• Sync inter-service gRPC Communication
• Async Microservices Communication with RabbitMQ Message-Broker
Service
• Using RabbitMQ Publish/Subscribe Topic Exchange Model
• Using MassTransit for abstraction over RabbitMQ Message-Broker
system
• Publishing the BasketCheckout event from the Basket microservice and subscribing to it from the Ordering microservice (see the sketch after this list)
• Creating a common EventBus.Messages class library and referencing it from the microservices
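The article does not show the contents of the EventBus.Messages library here; as a rough sketch (the class shape and property names are assumptions, not necessarily the repository's exact types), a shared integration event and its publication with MassTransit could look like this:

using System.Threading.Tasks;
using MassTransit;

// EventBus.Messages class library: a shared integration event (hypothetical shape).
public class BasketCheckoutEvent
{
    public string UserName { get; set; }
    public decimal TotalPrice { get; set; }
}

// Basket.API side: publish the event through MassTransit, which routes it to RabbitMQ.
public class BasketCheckoutPublisher
{
    private readonly IPublishEndpoint _publishEndpoint;

    public BasketCheckoutPublisher(IPublishEndpoint publishEndpoint)
    {
        _publishEndpoint = publishEndpoint;
    }

    public Task PublishAsync(BasketCheckoutEvent eventMessage)
    {
        // MassTransit publishes the message to the configured RabbitMQ exchange.
        return _publishEndpoint.Publish(eventMessage);
    }
}

On the Ordering side, a class implementing MassTransit's IConsumer&lt;BasketCheckoutEvent&gt; would subscribe to and handle this event.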
Ordering microservice which includes;
• ASP.NET Core Web API application
• Implementing DDD, CQRS and Clean Architecture using best practices
• Developing CQRS with the MediatR, FluentValidation and AutoMapper NuGet packages (a command/handler sketch follows this list)
• Consuming RabbitMQ BasketCheckout event queue with using
MassTransit-RabbitMQ Configuration
• SqlServer database connection and containerization
• Using the Entity Framework Core ORM and auto-migrating to SQL Server on application startup
• Swagger Open API implementation
• Dockerfile and docker-compose implementation
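To give a feel for the CQRS style used in Ordering, here is a minimal hypothetical sketch of a MediatR command and handler; the names and properties are illustrative, not the repository's exact classes:

using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Command: carries the data needed to create an order (the write side of CQRS).
public class CheckoutOrderCommand : IRequest<int>
{
    public string UserName { get; set; }
    public decimal TotalPrice { get; set; }
}

// Handler: contains the business logic for the command and returns the new order id.
public class CheckoutOrderCommandHandler : IRequestHandler<CheckoutOrderCommand, int>
{
    public Task<int> Handle(CheckoutOrderCommand request, CancellationToken cancellationToken)
    {
        // In the real service this would map the command to an Order entity (AutoMapper),
        // persist it through a repository, and return the generated order id.
        return Task.FromResult(1);
    }
}

A controller then only sends the command with IMediator.Send(command), and FluentValidation validators can run in the MediatR pipeline before the handler executes.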
API Gateway Ocelot microservice which includes;
• Implementing API Gateways with Ocelot (an example route definition follows this list)
• Sample microservices/containers to reroute through the API Gateways
• Run multiple different API Gateway/BFF container types
• The Gateway aggregation pattern in Shopping.Aggregator
• Dockerfile and docker-compose implementation
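For context, an Ocelot route definition in ocelot.json typically looks like the snippet below. The host and port values here are assumptions based on the container names used in this article, and depending on the Ocelot version the top-level key is "Routes" (newer) or "ReRoutes" (older):

{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/v1/Catalog",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "catalog.api", "Port": 80 } ],
      "UpstreamPathTemplate": "/Catalog",
      "UpstreamHttpMethod": [ "GET" ]
    }
  ]
}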
WebUI ShoppingApp microservice which includes;
• ASP.NET Core Web Application with Bootstrap 4 and Razor template
• Call Ocelot APIs with HttpClientFactory and Polly
• ASP.NET Core Razor tools: View Components, Partial Views, Tag Helpers, Model Binding and Validation, Razor Sections, etc.
• Dockerfile and docker-compose implementation
Microservices Cross-Cutting Implementations
• Implementing centralized distributed logging with the Elastic Stack (ELK): Elasticsearch, Logstash, Kibana and Serilog for the microservices
• Using the HealthChecks feature in the back-end ASP.NET microservices (a minimal registration sketch follows this list)
• Using a WatchDog in a separate service that watches health and load across services and reports on the microservices by querying their HealthChecks
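As a minimal sketch of the HealthChecks registration mentioned above (a Startup fragment; the "/hc" endpoint path is an assumption, and dependency-specific checks come from extra packages such as AspNetCore.HealthChecks.MongoDb):

// Startup.cs fragment: register health checks and expose a health endpoint.
public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks();
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
        endpoints.MapHealthChecks("/hc"); // a watchdog service can poll endpoints like this
    });
}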
Microservices Resilience Implementations
• Making microservices more resilient by using IHttpClientFactory to implement resilient HTTP requests
• Implementing Retry and Circuit Breaker patterns with exponential backoff using IHttpClientFactory and Polly policies (see the sketch after this list)
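A minimal sketch of how retry with exponential backoff and a circuit breaker can be attached to a typed HttpClient with Polly; the CatalogService type and the gateway address are hypothetical, and the real repository may wire this differently (requires the Microsoft.Extensions.Http.Polly package):

// Startup.ConfigureServices fragment (sketch).
services.AddHttpClient<CatalogService>(client =>
    {
        client.BaseAddress = new Uri("http://localhost:8010"); // hypothetical gateway address
    })
    // Retry 3 times on transient HTTP errors, waiting 2, 4, then 8 seconds.
    .AddTransientHttpErrorPolicy(policy =>
        policy.WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))))
    // Open the circuit for 30 seconds after 5 consecutive failures.
    .AddTransientHttpErrorPolicy(policy =>
        policy.CircuitBreakerAsync(5, TimeSpan.FromSeconds(30)));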
Ancillary Containers
• Using Portainer, a lightweight container management UI that allows you to easily manage your different Docker environments
• Using pgAdmin PostgreSQL Tools, a feature-rich open source administration and development platform for PostgreSQL
Docker Compose setup with all microservices on Docker;
• Containerization of microservices
• Containerization of databases
• Override Environment variables
Udemy Course — Microservices Architecture and Step by Step
Implementation on .NET
Get the Udemy course at a discount: Microservices Architecture and Implementation on .NET.
You can now enroll in the course, where you'll learn to use .NET Core in the microservices world, understand how to use ASP.NET Web API, Docker, RabbitMQ, gRPC, Ocelot API Gateway, MongoDB, Redis, PostgreSQL, SQL Server, Entity Framework Core, CQRS and a Clean Architecture implementation, and develop real-world applications that make a difference!
Source Code
Get the source code from the AspnetRun Microservices GitHub repository. Clone or fork the repository, and if you like it, don't forget to star it. If you find a problem or have a question, you can open an issue directly on the repository.
Prerequisites
• Install the .NET 5 or above SDK
• Install Visual Studio 2019 v16.x or above
• Docker Desktop — Memory: 4 GB
Run Application
Follow these steps to get your development environment set up (start Docker Desktop before running):
1. Clone the repository.
2. At the root directory, which includes the docker-compose.yml files, run the command below:
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d
3. Wait for Docker Compose to start all microservices. The first time you run this command, it can take more than 10 minutes to prepare all the Docker containers.
4. You can launch the microservices at the URLs below:
• Catalog API -> http://host.docker.internal:8000/swagger/index.html
• Basket API -> http://host.docker.internal:8001/swagger/index.html
• Discount API -> http://host.docker.internal:8002/swagger/index.html
• Ordering API -> http://host.docker.internal:8004/swagger/index.html
• Shopping.Aggregator -> http://host.docker.internal:8005/swagger/index.html
• API Gateway -> http://host.docker.internal:8010/Catalog
• Rabbit Management Dashboard -> http://host.docker.internal:15672 —
guest/guest
• Portainer -> http://host.docker.internal:9000 — admin/admin1234
• pgAdmin PostgreSQL -> http://host.docker.internal:5050 —
admin@aspnetrun.com/admin1234
• Elasticsearch -> http://host.docker.internal:9200
• Kibana -> http://host.docker.internal:5601
• Web Status -> http://host.docker.internal:8007
• Web UI -> http://host.docker.internal:8006
Launch http://host.docker.internal:8007 in your browser to view the Web Status. Make sure that every microservice is healthy.
Launch http://host.docker.internal:8006 in your browser to view the Web UI. You can use the Web project to call the microservices over the API Gateway. When you check out the basket, you can follow the queue record on the RabbitMQ dashboard.
According to this design, we have a web application that basically implements the e-commerce domain. This application has the functionalities below;
• Retrieving products and categories, listing and filtering them
• Adding products to the basket with a quantity and color, and calculating the total basket price
• Checking out the basket and creating an order by submitting the basket
So you can perform an end-to-end test by following the use cases above.
What are Microservices?
Microservices are small business services that can work together and can be deployed autonomously and independently. These services communicate with each other over the network and bring many advantages with them.
One of the biggest advantages is that they can be deployed independently. In addition, they offer the opportunity to work with many different technologies (technology agnostic).
Microservices Characteristics
Microservices are small, independent, and loosely coupled. A single small
team of developers can write and maintain a service. Each service is a
separate codebase, which can be managed by a small development team.
Services can be deployed independently. A team can update an existing
service without rebuilding and redeploying the entire application.
Services are responsible for persisting their own data or external state. This
differs from the traditional model, where a separate data layer handles data
persistence.
Services communicate with each other by using well-defined APIs. Internal
implementation details of each service are hidden from other services.
Services don’t need to share the same technology stack, libraries, or
frameworks.
Monolithic Architecture Pros-Cons
A monolithic application has a single codebase that contains multiple
modules. Modules are divided according to their functional or technical
characteristics. It has a single build system that builds the entire application.
It also has a single executable or deployable file.
Strengths of Monolithic Architecture
Easier debugging and end-to-end testing: Unlike the Microservice
architecture, monolithic applications are much easier to debug and test.
It is easy to develop and maintain for small projects, and the application can be developed quickly. Testing is also easy, and you can perform end-to-end tests much faster.
Easy deployment: An advantage associated with monolithic applications
being a single piece is easy deployment. It is much easier to deploy a single
part than to deploy dozens of services.
Transaction management is also easy.
Weaknesses of Monolithic Architecture
Complexity: When a monolithic application grows, it becomes too complex
to understand. As the application grows, it becomes difficult to develop new
features and maintain existing code.
With the increase in the number of teams and employees working on the
project, development and maintenance becomes more difficult. Because of
their dependencies on each other, changes made in one functionality can
affect other places.
The challenge of making changes: Making changes is very cumbersome in
such a large and complex application. Any code change affects the entire
system, so all parts must be thoroughly checked. This can make the overall
development process much longer. Even with a small change in the application, the entire application must be redeployed.
Inability to apply new technology: Introducing a new technology in a monolithic application is extremely problematic; the same programming language and frameworks must be used, and migrating the application to a new technology means redeveloping the whole application.
Low scalability: You cannot scale components independently; you have to scale the entire application.
Microservices Architecture Pros-Cons
Microservices are small business services that can work together and can be deployed autonomously and independently.
Strengths of Microservice Architecture
Independent Services: Each service can be deployed and updated independently, providing more flexibility. An error in a microservice affects only that specific service, not the entire application. Also, adding new features to a microservices application is easier than to a monolithic one.
Whether the application is very large or very small, adding new features and
maintaining existing code is easy. Because it is sufficient to make changes
only within the relevant service.
Better scalability: Each service can be scaled independently.
Therefore, the entire application does not need to be scaled. This saves a lot
of time and money. Additionally, every monolithic application has limits in terms of scalability, whereas in a microservices architecture a service that receives heavy traffic can be scaled out on its own to handle the load.
This is why most projects that appeal to a large user base, have begun to
adopt the microservice architecture.
Technology Diversity: Teams do not have to commit to one technology for every service; they can choose the appropriate technology for each service they develop over time. For example, a service can be developed with the Python programming language in order to use machine learning features alongside microservices developed on .NET. The desired technology or database can be used for each microservice.
Higher level of agility: Any errors in a microservice application only affect a
particular service, not the entire application. Therefore, all changes and tests
are carried out with lower risks and fewer errors.
Teams can work more efficiently and quickly. Folks who are just starting the
project can easily adapt without getting lost in the code base.
Intelligibility: A microservice application broken down into smaller and
simpler components is easier to understand and manage.
Since each service is independent from each other and only has its own
business logic, the code base of the service will be quite simple. It is easier to
understand and manage.
Weaknesses of Microservice Architecture
Microservice architecture should never be preferred for small-scale
applications.
Extra complexity: Since a microservice architecture is a distributed system, you need to configure each module and database separately. In addition, because such an application consists of independent services, each of them must be deployed independently.
System distribution: A microservice architecture is a complex system made up of multiple services and databases, so all connections need to be handled carefully, and deployment requires a separate process for each service.
The challenge of management and traceability: You will need to deal with
multiple situations when creating a microservice application. It is necessary
to be able to monitor external configuration, metrics, health-check
mechanisms and environments where microservices can be managed.
The challenge of testing: The large number of services deployed
independently of each other makes the testing process extremely difficult.
Since there is more than one service and more than one database, transaction management is difficult.
Monolithic vs Microservices Architecture Comparison
You can find monolithic architecture on left side, and on the right side is
microservices architecture.
Monolithic application has a single codebase that contains multiple
modules. Modules are divided according to their functional or technical
characteristics. It has a single build system that builds the entire application.
It also has a single executable or deployable file.
Microservices are small, independent, and loosely coupled.
A single small team of developers can write and maintain a service.
Each service is a separate codebase, which can be managed by a small
development team. Services can be deployed independently. A team can
update an existing service without rebuilding and redeploying the entire
application.
Services are responsible for persisting their own data or external state.
This differs from the traditional model, where a separate data layer handles
data persistence. Services communicate with each other by using well-defined APIs. Internal implementation details of each service are hidden
from other services. Services don’t need to share the same technology stack,
libraries, or frameworks.
Deployment Comparison
Monolithic applications typically have large development organizations, and a single code base creates communication overhead.
• The path from code commit to production is arduous.
• Changes sit in a queue until they can be manually tested.
• This architecture results in large, complex, unreliable applications that are difficult to maintain.
With microservices:
• Small, autonomous, loosely coupled teams
• Each service has its own source code repository
• Each service has its own automated deployment pipeline
This architecture results in small, simple, reliable, maintainable services.
What are Docker and Containers?
Docker is an open platform for developing, shipping, and running
applications. Docker enables you to separate your applications from your
infrastructure so you can deliver software quickly.
By taking advantage of Docker's methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production. Docker provides for automating the deployment of applications as portable, self-sufficient containers that can run on the cloud or on-premises. Docker containers can run anywhere, from your local computer to the cloud, and container images can run natively on Linux and Windows.
Docker Container
A container is a standard unit of software that packages up code and all its
dependencies so the application runs quickly and reliably from one
computing environment to another. A Docker container image is a
lightweight, standalone, executable package of software that includes
everything needed to run an application.
Docker containers, images, and registries
When using Docker, a developer develops an application and packages it
with its dependencies into a container image. An image is a static
representation of the application with its configuration and dependencies.
In order to run the application, the application’s image is instantiated to
create a container, which will be running on the Docker host.
Containers can be tested on local development machines.
As you can see in the image above, the Docker components relate to each other as follows: the developer creates a container locally and pushes the image to a Docker registry, or downloads an existing image from a registry and creates a container from that image in the local environment.
Developers should store images in a registry, which is a library of images and is needed when deploying to production orchestrators. Docker images are stored in a public registry via Docker Hub; other vendors provide registries for different collections of images, including Azure Container Registry. Alternatively, enterprises can have a private on-premises registry for their own Docker images.
Get the Udemy course at a discount: Microservices Architecture and Implementation on .NET.
Get the Source Code from AspnetRun Microservices Github
Follow Series of Microservices Articles
This is the introduction to the series. You can follow the series with the links below.
• 0- Microservices Architecture on .NET with applying CQRS, Clean
Architecture and Event-Driven Communication
• 1- Microservice Using ASP.NET Core, MongoDB and Docker Container
• 2- Using Redis with ASP.NET Core, and Docker Container for Basket
Microservices
• 3- Using PostgreSQL and Dapper with ASP.NET and Docker Container for
Discount Microservices
• 4- Building Ocelot API Gateway Microservices with ASP.NET Core and
Docker Container
• 5- Microservices Event Driven Architecture with RabbitMQ and Docker
Container on .NET
• 6- CQRS and Event Sourcing in Event Driven Architecture of Ordering
Microservices
• 7- Microservices Cross-Cutting Concerns with Distributed Logging and
Microservices Resilience
• 8- Securing Microservices with IdentityServer4 with OAuth2 and
OpenID Connect fronted by Ocelot API Gateway
• 9- Using gRPC in Microservices for Building a high-performance
Interservice Communication with .Net 5
• 10-Deploying .Net Microservices to Azure Kubernetes Services(AKS) and
Automating with Azure DevOps
Microservices Using ASP.NET Core,
MongoDB and Docker Container
Mehmet Ozkaya
Published in aspnetrun · 13 min read · May 13, 2020
Building the Catalog microservice on .NET using ASP.NET Web API, Docker, MongoDB and Swagger, and testing the microservice with Postman.
Introduction
In this article, we will show how to build the Catalog microservice as an ASP.NET Core Web application using MongoDB, a Docker container and Swagger.
By the end of the article, we will have a Web API that implements CRUD operations over Product and Category documents on MongoDB.
Look at the final swagger application.
Developing Catalog microservice which includes;
• ASP.NET Core Web API application
• REST API principles, CRUD operations
• Mongo DB NoSQL database connection and containerization
• N-Layer implementation with Repository Pattern
• Swagger Open API implementation
• Dockerfile implementation
At the end of the article, we will have a Web API microservice that implements CRUD operations over Product and Category documents on MongoDB.
Background
You can follow the previous article, which explains the overall microservice architecture of this repository. We will focus on the Catalog microservice from that overall e-commerce microservice architecture.
Step by Step Development w/ Udemy Course
Get the Udemy course at a discount: Microservices Architecture and Implementation on .NET.
Source Code
Get the source code from the AspnetRun Microservices GitHub repository. Clone or fork the repository, and if you like it, don't forget to star it. If you find a problem or have a question, you can open an issue directly on the repository.
Prerequisites
• Install the .NET 5 SDK or above
• Install Visual Studio 2019 v16.x or above
• Docker Desktop
MongoDB
MongoDB describes itself as an open source, document-oriented database designed for ease of development and scaling. Every record in MongoDB is actually a document. Documents are stored in MongoDB in a JSON-like binary format called BSON (Binary JSON). BSON documents are objects that contain an ordered list of the elements they store. Each element consists of a field name and a value of a certain type.
It is a document-based NoSQL database. It keeps data structurally in JSON-like documents. Queries can be written against any field or by range. If we compare MongoDB structures with relational database structures, it uses collections instead of tables and documents instead of rows.
Analysis & Design
This project will be the REST API that basically performs CRUD operations on the Catalog database.
We should define our Catalog use case analysis. In this part we will create the Product and Category entities. Our main use cases are listing products and categories and being able to search products. We also perform CRUD operations on the Product entity.
Our main use cases;
• Listing Products and Categories
• Get Product with product Id
• Get Products with category
• Create new Product
• Update Product
• Delete Product
Along with this, we should design our APIs from a REST perspective. According to the analysis, we are going to create the Swagger output below;
Architecture of Catalog microservices
We are going to use traditional N-Layer architecture. Layered architecture
basically consists of 3 layers. These 3 layers are generally the ones that
should be in every project. You can create a layered structure with more than
these 3 layers, which is called multi-layered architecture.
Data Access Layer: Only database operations are performed on this layer.
The task of this layer is to add, delete, update and extract data from the
database. There is no other operation in this layer other than these
operations.
Business Layer: We implement business logic on this layer. This layer processes the data obtained from the Data Access layer for the rest of the project.
We do not use the Data Access layer directly in our applications.
The data coming from the user first goes to the Business layer, from there it
is processed and transferred to the Data Access layer.
Presentation Layer: This layer is the layer on which the user interacts.
It could be in Windows form, on the Web, or in a Console application.
The main purpose here is to show the data to the user and to transmit the
data from the user to the Business Layer and Data Access.
Simple Data Driven CRUD Microservice
The Catalog.API microservice will be a simple CRUD implementation over Product data in the Mongo database.
You can apply any internal design pattern per microservice. Since we have one project, we are going to separate the layers using folders inside the project, so we don't need to separate the layers into different assemblies. (For the Ordering microservice, by contrast, we separate the layers into projects using Clean Architecture and a CQRS implementation.)
If we look at the project structure, we are planning to create these layers:
• Domain Layer — contains business rules and logic.
• Application Layer — exposes endpoints and validations. The API layer will be the Controller classes.
• Infrastructure Layer — responsible for persistence operations.
Project Folder Structure
• Entities — mongo entity
• Data — mongo data context
• Repositories — mongo repos
• Controllers — api classes
Database Setup with Docker
For the Catalog microservice, we are going to use the NoSQL MongoDB database.
Setup Mongo Database
Here are the Docker commands that download the Mongo image to your local machine and run the database.
In order to download the Mongo image from Docker Hub, use the command below;
docker pull mongo
To run the database in your Docker environment, use the command below. It will expose port 27017 in your local environment.
docker run -d -p 27017:27017 --name aspnetrun-mongo mongo
Starting Our Project
Create a new web application with Visual Studio.
First, open File -> New -> Project. Select ASP.NET Core Web Application, give your project a name and select OK.
In the next window, select .NET Core and the latest ASP.NET Core version, select Web API, uncheck the "Configure for HTTPS" option and click OK. This is the default Web API template. HTTPS is unchecked because we don't use HTTPS for our APIs for now.
Add New Web API project under below location and name;
src/catalog/Catalog.API
Library & Frameworks
For the Catalog microservice, we have two libraries in our NuGet packages:
1- MongoDB.Driver — to connect to the Mongo database
2- Swashbuckle.AspNetCore — to generate the Swagger index page
Create Entities
Create an Entities folder in your project. It will contain the MongoDB collection entities of your project. In this section, we will use the MongoDB client when connecting to the database. That's why we write the entity classes first.
Add New Class -> Product
public class Product
{
    [BsonId]
    [BsonRepresentation(BsonType.ObjectId)]
    public string Id { get; set; }

    [BsonElement("Name")]
    public string Name { get; set; }

    public string Category { get; set; }
    public string Summary { get; set; }
    public string Description { get; set; }
    public string ImageFile { get; set; }
    public decimal Price { get; set; }
}
The Bson annotations mark properties for database mapping; e.g., BsonId marks the primary key of the Product collection.
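For illustration, a Product document stored in MongoDB then looks roughly like this (the field values are made up):

{
  "_id": "602d2149e773f2a3990b47f5",
  "Name": "IPhone X",
  "Category": "Smart Phone",
  "Summary": "A short summary of the product.",
  "Description": "A longer description of the product.",
  "ImageFile": "product-1.png",
  "Price": 950.00
}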
Create Data Layer
Create a Data folder in your project. It will contain the MongoDB context of your project.
In order to manage the entities, we should create a data structure. To work with the database, we will use the MongoDB client through context classes that wrap data access.
To store these entities, we start with the ICatalogContext interface.
Create the ICatalogContext interface:
public interface ICatalogContext
{
    IMongoCollection<Product> Products { get; }
}
Basically, what we expect from our DB context object is the Products collection.
Continue with the implementation and create the CatalogContext class.
public class CatalogContext : ICatalogContext
{
    public CatalogContext(ICatalogDatabaseSettings settings)
    {
        var client = new MongoClient(settings.ConnectionString);
        var database = client.GetDatabase(settings.DatabaseName);

        Products = database.GetCollection<Product>(settings.CollectionName);
        CatalogContextSeed.SeedData(Products);
    }

    public IMongoCollection<Product> Products { get; }
}
In this class, the constructor initiates the MongoDB connection using the MongoClient library and loads the Products collection.
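The constructor also calls CatalogContextSeed.SeedData, which is not listed in this article; a minimal sketch of what such a seed class could look like (the sample product values are made up) is:

using System.Collections.Generic;
using MongoDB.Driver;

public class CatalogContextSeed
{
    public static void SeedData(IMongoCollection<Product> productCollection)
    {
        // Only insert sample data when the collection is empty.
        bool hasProducts = productCollection.Find(p => true).Any();
        if (!hasProducts)
        {
            productCollection.InsertMany(new List<Product>
            {
                new Product { Name = "IPhone X", Category = "Smart Phone", Price = 950.00M },
                new Product { Name = "Samsung 10", Category = "Smart Phone", Price = 840.00M }
            });
        }
    }
}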
The code gets the connection string from the settings. In ASP.NET Core, this configuration is stored in the appsettings.json file;
"CatalogDatabaseSettings": {
  "ConnectionString": "mongodb://localhost:27017",
  "DatabaseName": "CatalogDb",
  "CollectionName": "Products"
},
At this point, you can adjust the configuration for your dockerized MongoDB. The default port is 27017, which is why we use the same port.
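The CatalogContext constructor above receives an ICatalogDatabaseSettings object, which this article does not define. A minimal sketch, assuming the settings type simply mirrors the appsettings.json section and is bound with the options pattern (the real repository may wire this differently), could be:

using Microsoft.Extensions.Options;

public interface ICatalogDatabaseSettings
{
    string ConnectionString { get; set; }
    string DatabaseName { get; set; }
    string CollectionName { get; set; }
}

public class CatalogDatabaseSettings : ICatalogDatabaseSettings
{
    public string ConnectionString { get; set; }
    public string DatabaseName { get; set; }
    public string CollectionName { get; set; }
}

// Startup.ConfigureServices fragment: bind the section and expose it as the interface.
services.Configure<CatalogDatabaseSettings>(
    Configuration.GetSection("CatalogDatabaseSettings"));
services.AddSingleton<ICatalogDatabaseSettings>(sp =>
    sp.GetRequiredService<IOptions<CatalogDatabaseSettings>>().Value);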
Register DataContext into ASP.NET Dependency Injection
You should register the context class in the ASP.NET built-in dependency injection engine so that ASP.NET Core can resolve it wherever it is used in the web application.
Open Startup.cs, go to the ConfigureServices method and add your dependencies.
The MongoDB context object should be registered in DI when the application starts. Ensure that the context object is configured properly in the ConfigureServices method.
public void ConfigureServices(IServiceCollection services)
{
    #region Project Dependencies
    services.AddScoped<ICatalogContext, CatalogContext>();
    #endregion
}
Create Business Layer
For the business logic layer, we should create a new folder, whose name could be Service, Application, Manager or Repository, in order to manage business operations using the data access layer objects. Here we use a Repository folder for that purpose.
According to our main use cases we will create interface and
implementation classes in our business layer.
• Listing Products and Categories
• Get Product with product Id
• Get Products with category
• Create new Product
• Update Product
• Delete Product
So, let's create/open a Repository folder and create a new IProductRepository interface in order to manage Product-related requests.
public interface IProductRepository
{
    Task<IEnumerable<Product>> GetProducts();
    Task<Product> GetProduct(string id);
    Task<IEnumerable<Product>> GetProductByName(string name);
    Task<IEnumerable<Product>> GetProductByCategory(string categoryName);

    Task CreateProduct(Product product);
    Task<bool> UpdateProduct(Product product);
    Task<bool> DeleteProduct(string id);
}
Let's implement this interface using the data layer objects. In our case, the data layer is provided by the Mongo client library, so we should use the context object. In order to use the CatalogContext object, which represents the DB layer for us, the constructor should use dependency injection to inject the database context (CatalogContext) into the ProductRepository class.
public class ProductRepository : IProductRepository
{
    private readonly ICatalogContext _context;

    public ProductRepository(ICatalogContext context)
    {
        _context = context ?? throw new ArgumentNullException(nameof(context));
    }

    public async Task<IEnumerable<Product>> GetProducts()
    {
        return await _context
                        .Products
                        .Find(p => true)
                        .ToListAsync();
    }

    public async Task<Product> GetProduct(string id)
    {
        return await _context
                        .Products
                        .Find(p => p.Id == id)
                        .FirstOrDefaultAsync();
    }

    public async Task<IEnumerable<Product>> GetProductByName(string name)
    {
        // Simple equality filter on the product name.
        FilterDefinition<Product> filter = Builders<Product>.Filter.Eq(p => p.Name, name);

        return await _context
                        .Products
                        .Find(filter)
                        .ToListAsync();
    }

    public async Task<IEnumerable<Product>> GetProductByCategory(string categoryName)
    {
        FilterDefinition<Product> filter = Builders<Product>.Filter.Eq(p => p.Category, categoryName);

        return await _context
                        .Products
                        .Find(filter)
                        .ToListAsync();
    }

    public async Task CreateProduct(Product product)
    {
        await _context.Products.InsertOneAsync(product);
    }

    public async Task<bool> UpdateProduct(Product product)
    {
        var updateResult = await _context
                                    .Products
                                    .ReplaceOneAsync(filter: g => g.Id == product.Id, replacement: product);

        return updateResult.IsAcknowledged
            && updateResult.ModifiedCount > 0;
    }

    public async Task<bool> DeleteProduct(string id)
    {
        FilterDefinition<Product> filter = Builders<Product>.Filter.Eq(p => p.Id, id);

        DeleteResult deleteResult = await _context
                                            .Products
                                            .DeleteOneAsync(filter);

        return deleteResult.IsAcknowledged
            && deleteResult.DeletedCount > 0;
    }
}
Basically, in the ProductRepository class, we manage all business-related actions using the CatalogContext object. You can put all the business logic into these functions in order to manage it in one place.
Don’t forget to add below references into your repository implementations
class;
using MongoDB.Driver;
Through this library, we use the Mongo operations over the Products collection (the Find, InsertOne, ReplaceOne and DeleteOne methods).
Register Repository into ASP.NET Dependency Injection
You should register the repository classes in the ASP.NET built-in dependency injection engine so that ASP.NET Core can resolve them wherever they are used in the web application.
Open Startup.cs, go to the ConfigureServices method and add your dependencies;
public void ConfigureServices(IServiceCollection services)
{
    …
    #region Project Dependencies
    services.AddScoped<ICatalogContext, CatalogContext>();
    services.AddScoped<IProductRepository, ProductRepository>();
    #endregion
}
Create Presentation Layer
Since we created the project from the ASP.NET Core Web API template, the presentation layer will be the Controller classes, which make up the API layer.
Locate the Controllers folder and create the CatalogController class.
[ApiController]
[Route("api/v1/[controller]")]
public class CatalogController : ControllerBase
{
    private readonly IProductRepository _repository;
    private readonly ILogger<CatalogController> _logger;

    public CatalogController(IProductRepository repository, ILogger<CatalogController> logger)
    {
        _repository = repository ?? throw new ArgumentNullException(nameof(repository));
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
    }

    [HttpGet]
    [ProducesResponseType(typeof(IEnumerable<Product>), (int)HttpStatusCode.OK)]
    public async Task<ActionResult<IEnumerable<Product>>> GetProducts()
    {
        var products = await _repository.GetProducts();
        return Ok(products);
    }

    [HttpGet("{id:length(24)}", Name = "GetProduct")]
    [ProducesResponseType((int)HttpStatusCode.NotFound)]
    [ProducesResponseType(typeof(Product), (int)HttpStatusCode.OK)]
    public async Task<ActionResult<Product>> GetProductById(string id)
    {
        var product = await _repository.GetProduct(id);
        if (product == null)
        {
            _logger.LogError($"Product with id: {id}, not found.");
            return NotFound();
        }
        return Ok(product);
    }

    [Route("[action]/{category}", Name = "GetProductByCategory")]
    [HttpGet]
    [ProducesResponseType(typeof(IEnumerable<Product>), (int)HttpStatusCode.OK)]
    public async Task<ActionResult<IEnumerable<Product>>> GetProductByCategory(string category)
    {
        var products = await _repository.GetProductByCategory(category);
        return Ok(products);
    }

    [HttpPost]
    [ProducesResponseType(typeof(Product), (int)HttpStatusCode.OK)]
    public async Task<ActionResult<Product>> CreateProduct([FromBody] Product product)
    {
        // Calls match the IProductRepository method names defined above.
        await _repository.CreateProduct(product);
        return CreatedAtRoute("GetProduct", new { id = product.Id }, product);
    }

    [HttpPut]
    [ProducesResponseType(typeof(Product), (int)HttpStatusCode.OK)]
    public async Task<IActionResult> UpdateProduct([FromBody] Product product)
    {
        return Ok(await _repository.UpdateProduct(product));
    }

    [HttpDelete("{id:length(24)}", Name = "DeleteProduct")]
    [ProducesResponseType(typeof(Product), (int)HttpStatusCode.OK)]
    public async Task<IActionResult> DeleteProductById(string id)
    {
        return Ok(await _repository.DeleteProduct(id));
    }
}
In this class, we are creating the API over the data through the business layer. First we pass the IProductRepository interface into the constructor of the class in order to use the repository functions inside the API calls.
API Routes in Controller Classes
The controller class provides the routes below as the intended methods in CatalogController.cs. Along with this, we developed our APIs according to the list below.
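Derived from the CatalogController code above, the exposed routes are:
• GET /api/v1/Catalog -> get all products
• GET /api/v1/Catalog/{id} -> get a single product by id
• GET /api/v1/Catalog/GetProductByCategory/{category} -> get products by category
• POST /api/v1/Catalog -> create a new product
• PUT /api/v1/Catalog -> update a product
• DELETE /api/v1/Catalog/{id} -> delete a product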
Swagger Implementation
Swagger is a dynamic documentation tool that is widely used and accepted in the software world. Its implementation within .NET Core projects is quite simple.
Implementation of Swagger
1- Let’s download and install the Swashbuckle.AspNetCore package to the
web api project via nuget.
2- Let’s add the swagger as a service in the ConfigureServices method in the
Startup.cs class of our project.
public void ConfigureServices(IServiceCollection services)
{
    …
    #region Swagger Dependencies
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "Catalog API", Version = "v1" });
    });
    #endregion
}
3- After that we will use this added service in the Configure method in
Startup.cs.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    …
    app.UseSwagger();
    app.UseSwaggerUI(c =>
    {
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "Catalog API V1");
    });
}
Run Application
Now the Catalog microservice Web API application is ready to run.
Before running the application, configure the debug profile;
Right Click the project File and Select to Debug section.
Change Launch browser to swagger
Change the App URL to http://localhost:5000
Hit F5 on Catalog.API project.
We have exposed the Product APIs in our Catalog microservice; you can test them over the Swagger GUI.
You can also test them with Postman as shown below. The image above is an example of testing the Get Catalog method.
Run Application on Docker with Database
Up to this point we developed the ASP.NET Core Web API project for the Catalog microservice. Now it's time to dockerize the Catalog API project together with MongoDB.
Add Docker Compose and Dockerfile
Normally you could add only a Dockerfile to dockerize the Web API application, but we will integrate our API project with the MongoDB Docker image, so we should create a docker-compose file along with the Dockerfile of the API project.
Right Click to Project -> Add -> ..Container Orchestration Support
Continue with default values.
Dockerfile and docker-compose files are created.
docker-compose.yml is a file used during development and testing, where the necessary definitions are made for running multi-container applications.
Docker-compose.yml
version: '3.4'

services:
  catalogdb:
    image: mongo

  catalog.api:
    image: ${DOCKER_REGISTRY-}catalogapi
    build:
      context: .
      dockerfile: src/Catalog/Catalog.API/Dockerfile
Docker-compose.override.yml
version: '3.4'

services:
  catalogdb:
    container_name: catalogdb
    restart: always
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site:/data/db
    ports:
      - "27017:27017"

  catalog.api:
    container_name: catalogapi
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - "CatalogDatabaseSettings:ConnectionString=mongodb://catalogdb:27017"
    depends_on:
      - catalogdb
    volumes:
      - ${HOME}/.microsoft/usersecrets/:/root/.microsoft/usersecrets
      - ${HOME}/.aspnet/https:/root/.aspnet/https/
    ports:
      - "8000:80"
Basically, the docker-compose.yml file defines two images: the first is for MongoDB, named catalogdb, and the second is the web API project, named catalog.api.
After that, we configure these images in the docker-compose.override.yml file.
The override file says that:
• catalogdb, which is the Mongo database, will expose port 27017.
• catalog.api, which is our web API project, depends on catalogdb, exposes port 8000, and overrides the connection string to point at catalogdb.
Run the command below at the root of the project folder, which includes the docker-compose.yml files.
docker-compose -f docker-compose.yml -f docker-compose.override.yml up --build
That’s it!
You can check the microservice at the URL below:
Catalog API -> http://localhost:8000/swagger/index.html
SEE DATA with Test over Swagger
/api/v1/Catalog
Get the Udemy course at a discount: Microservices Architecture and Implementation on .NET.
Get the Source Code from AspnetRun Microservices Github
Follow Series of Microservices Articles
This article is part of a series. You can follow the series with the links below.
• 0- Microservices Architecture on .NET with applying CQRS, Clean
Architecture and Event-Driven Communication
• 1- Microservice Using ASP.NET Core, MongoDB and Docker Container
• 2- Using Redis with ASP.NET Core, and Docker Container for Basket
Microservices
• 3- Using PostgreSQL and Dapper with ASP.NET and Docker Container for
Discount Microservices
• 4- Building Ocelot API Gateway Microservices with ASP.NET Core and
Docker Container
• 5- Microservices Event Driven Architecture with RabbitMQ and Docker
Container on .NET
• 6- CQRS and Event Sourcing in Event Driven Architecture of Ordering
Microservices
• 7- Microservices Cross-Cutting Concerns with Distributed Logging and
Microservices Resilience
• 8- Securing Microservices with IdentityServer4 with OAuth2 and
OpenID Connect fronted by Ocelot API Gateway
• 9- Using gRPC in Microservices for Building a high-performance
Interservice Communication with .Net 5
• 10-Deploying .Net Microservices to Azure Kubernetes Services(AKS) and
Automating with Azure DevOps
Using Redis with ASP.NET Core, and
Docker Container for Basket
Microservices
Mehmet Ozkaya
Published in aspnetrun · 13 min read · May 15, 2020
Building the Basket microservice on .NET using ASP.NET Web API, Docker, Redis and Swagger, and testing the microservice with Postman.
Introduction
In this article, we will show how to build the Basket microservice as an ASP.NET Core Web application using Redis, a Docker container and Swagger.
By the end of the article, we will have a Web API that implements CRUD operations over Basket and BasketItem objects. These objects will be stored in Redis as cache values, so in our case we use Redis as a NoSQL database.
Look at the final swagger application.
Developing Basket microservice which includes;
• ASP.NET Core Web API application
• REST API principles, CRUD operations
• Redis DB NoSQL database connection and containerization
• N-Layer implementation with Repository Pattern
• Swagger Open API implementation
• Dockerfile implementation
In the upcoming articles:
• Consume Discount Grpc Service for inter-service sync communication to
calculate product final price
• Publish BasketCheckout Queue with using MassTransit and RabbitMQ
At the end, you’ll have a working Web API Microservice running on your
local machine.
Background
You can follow the previous article, which explains the overall microservice architecture of this repository. We will focus on the Basket microservice from that overall e-commerce microservice architecture.
Step by Step Development w/ Udemy Course
Get the Udemy course at a discount: Microservices Architecture and Implementation on .NET.
Source Code
Get the source code from the AspnetRun Microservices GitHub repository. Clone or fork the repository, and if you like it, don't forget to star it. If you find a problem or have a question, you can open an issue directly on the repository.
Prerequisites
• Install the .NET 5 SDK or above
• Install Visual Studio 2019 v16.x or above
• Docker Desktop
Redis
Redis, an abbreviation of Remote Dictionary Server, positions itself as a data structure server. Redis is an open source NoSQL database that primarily holds its data in memory.
As mentioned in Redis's own documentation, Redis is not just a simple key-value server. One of the major differences from the alternatives is Redis's ability to store and use high-level data structures. These are basic data structures most developers are familiar with (list, map, set).
Advantages
It is extremely fast because it works in memory. It supports many data types. It can save data both in RAM and on disk according to the configuration you set. Since it also writes to disk, it continues to work with the same data after a restart. It has many enterprise features such as sharding, clustering, Sentinel and replication.
Disadvantages
Since it processes commands synchronously, you may not be able to reach the performance that asynchronous alternatives reach on a single instance. You will need RAM proportional to your data size. It does not support complex queries like relational databases. If a transaction encounters an error, there is no rollback.
Analysis & Design
This project will be the REST API that basically performs CRUD operations on the Basket database.
We should define our Basket use case analysis. In this part we will create the Basket and BasketItem entities. Our main use cases are listing the basket and its items and being able to add new items to the basket. We also perform CRUD operations on the Basket entity.
Our main use cases;
• Get Basket and Items with username
• Update Basket and Items (add — remove item on basket)
• Delete Basket
• Checkout Basket
Along with this, we should design our APIs from a REST perspective. According to the analysis, we are going to create the Swagger output below;
Architecture of Basket microservices
We are going to use traditional N-Layer architecture. Layered architecture
basically consists of 3 layers. These 3 layers are generally the ones that
should be in every project. You can create a layered structure with more than
these 3 layers, which is called multi-layered architecture.
Data Access Layer: Only database operations are performed on this layer.
The task of this layer is to add, delete, update and extract data from the
database. There is no other operation in this layer other than these
operations.
Business Layer: We implement business logic on this layer. This layer processes the data obtained from the Data Access layer for the rest of the project.
We do not use the Data Access layer directly in our applications.
The data coming from the user first goes to the Business layer, from there it
is processed and transferred to the Data Access layer.
Presentation Layer: This layer is the layer on which the user interacts.
It could be in Windows form, on the Web, or in a Console application.
The main purpose here is to show the data to the user and to transmit the
data from the user to the Business Layer and Data Access.
Simple Data Driven CRUD Microservice
The Basket.API microservice will be a simple CRUD implementation over Basket data in the Redis database.
You can apply any internal design pattern per microservice. Since we have one project, we are going to separate the layers using folders inside the project, so we don't need to separate the layers into different assemblies. (For the Ordering microservice, by contrast, we separate the layers into projects using Clean Architecture and a CQRS implementation.)
If we look at the project structure, we are planning to create these layers:
• Domain Layer — contains business rules and logic.
• Application Layer — exposes endpoints and validations. The API layer will be the Controller classes.
• Infrastructure Layer — responsible for persistence operations.
Project Folder Structure
• Entities — Redis entity
• Data — Redis data context
• Repositories — Redis repos
• Controllers — api classes
Database Setup with Docker
For the Basket microservice, we are going to use the NoSQL Redis database.
Setup Redis Database
Here are the Docker commands that download the Redis image to your local machine and run the database.
In order to download the Redis image from Docker Hub, use the command below;
docker pull redis
To run the database in your Docker environment, use the command below. It will expose port 6379 in your local environment.
docker run -d -p 6379:6379 --name aspnetrun-redis redis
Starting Our Project
Create a new web application with Visual Studio.
First, open File -> New -> Project. Select ASP.NET Core Web Application, give your project a name and select OK.
In the next window, select .NET Core and the latest ASP.NET Core version, select Web API, uncheck the "Configure for HTTPS" option and click OK. This is the default Web API template. HTTPS is unchecked because we don't use HTTPS for our APIs for now.
Add New Web API project under below location and name;
src/Basket/Basket.API
Library & Frameworks
For the Basket microservice, we have three libraries in our NuGet packages:
• Microsoft.Extensions.Caching.StackExchangeRedis — To connect redis
database
• Newtonsoft.Json — To parse json objects
• Swashbuckle.AspNetCore — To generate swagger index page
Create Entities
Create an Entities folder in your project. It will contain the Redis entities of your project. In this section, we will use the Redis client when connecting to the database. That's why we write the entity classes first.
Create “Entities” folder
Entities -> Add ShoppingCart and ShoppingCartItem class
using System.Collections.Generic;

namespace Basket.API.Entities
{
    public class ShoppingCart
    {
        public string UserName { get; set; }
        public List<ShoppingCartItem> Items { get; set; } = new List<ShoppingCartItem>();

        public ShoppingCart()
        {
        }

        public ShoppingCart(string userName)
        {
            UserName = userName;
        }

        public decimal TotalPrice
        {
            get
            {
                decimal totalprice = 0;
                foreach (var item in Items)
                {
                    totalprice += item.Price * item.Quantity;
                }
                return totalprice;
            }
        }
    }
}
ShoppingCartItem.cs
namespace Basket.API.Entities
{
    public class ShoppingCartItem
    {
        public int Quantity { get; set; }
        public string Color { get; set; }
        public decimal Price { get; set; }
        public string ProductId { get; set; }
        public string ProductName { get; set; }
    }
}
Connect Redis Docker Container from Basket.API Microservice
with AddStackExchangeRedisCache into DI
We are going to connect to the Redis Docker container from the Basket.API microservice. In order to manage the entities, we have to configure our application to support the Redis cache and specify the port at which Redis is available. To do this, navigate to the Startup.cs ConfigureServices method and add the following.
Startup.cs/ConfigureServices
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379";
});
It is better to get this URL from configuration. Move the configuration to appsettings.json:
"CacheSettings": {
  "ConnectionString": "localhost:6379"
},
Then update the startup with the configuration:
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = Configuration.GetValue<string>("CacheSettings:ConnectionString");
});
Now it is ready to use via IDistributedCache. With the AddStackExchangeRedisCache extension method, any class can have IDistributedCache injected, and an instance is created for us. We are going to inject this interface into the repository classes.
Create Business Layer
For the business logic layer, we should create a new folder, whose name could be Service, Application, Manager or Repository, in order to manage business operations using the data access layer objects. Here we use a Repository folder for that purpose.
The name Repository may not be the best fit for a business layer, but we are building a single small solution, which is why we chose it; you can rename it to Service, Application, Manager, Helper, etc.
According to our main use cases we will create interface and
implementation classes in our business layer.
• Get Basket and Items with username
• Update Basket and Items (add — remove item on basket)
• Delete Basket
• Checkout Basket
So, let's create/open a Repository folder and create a new IBasketRepository interface in order to manage Basket-related requests.
public interface IBasketRepository
{
    Task<ShoppingCart> GetBasket(string userName);
    Task<ShoppingCart> UpdateBasket(ShoppingCart basket);
    Task DeleteBasket(string userName);
}
The implementation of the repository interface should be as in the code below;
using Basket.API.Entities;
using Basket.API.Repositories.Interfaces;
using Microsoft.Extensions.Caching.Distributed;
using Newtonsoft.Json;
using System;
using System.Threading.Tasks;

namespace Basket.API.Repositories
{
    public class BasketRepository : IBasketRepository
    {
        private readonly IDistributedCache _redisCache;

        public BasketRepository(IDistributedCache cache)
        {
            _redisCache = cache ?? throw new ArgumentNullException(nameof(cache));
        }

        public async Task<ShoppingCart> GetBasket(string userName)
        {
            var basket = await _redisCache.GetStringAsync(userName);

            if (String.IsNullOrEmpty(basket))
                return null;

            return JsonConvert.DeserializeObject<ShoppingCart>(basket);
        }

        public async Task<ShoppingCart> UpdateBasket(ShoppingCart basket)
        {
            await _redisCache.SetStringAsync(basket.UserName, JsonConvert.SerializeObject(basket));

            return await GetBasket(basket.UserName);
        }

        public async Task DeleteBasket(string userName)
        {
            await _redisCache.RemoveAsync(userName);
        }
    }
}
Let me try to explain these methods:
— Basically, we are using the IDistributedCache object as a Redis cache, because in the Startup class we have configured Redis as the distributed cache.
— So when we inject this interface, it gives us a Redis cache instance.
— With the IDistributedCache object, almost every Redis CLI command is available as a method on this context class.
— We have implemented these methods according to our API requirements, for example GetStringAsync and SetStringAsync.
— We have used JsonConvert in order to save and extract JSON objects from the Redis cache database, so our basket and basket item structures are saved to Redis as JSON objects.
Create Presentation Layer
Since we created the project from the ASP.NET Core Web API template, the presentation layer will be the Controller classes, which make up the API layer.
Locate the Controllers folder and create the BasketController class.
using Basket.API.Entities;
using Basket.API.Repositories.Interfaces;
using Microsoft.AspNetCore.Mvc;
using System;
using System.Net;
using System.Threading.Tasks;

namespace Basket.API.Controllers
{
    [ApiController]
    [Route("api/v1/[controller]")]
    public class BasketController : ControllerBase
    {
        private readonly IBasketRepository _repository;

        public BasketController(IBasketRepository repository)
        {
            _repository = repository ?? throw new ArgumentNullException(nameof(repository));
        }

        [HttpGet("{userName}", Name = "GetBasket")]
        [ProducesResponseType(typeof(ShoppingCart), (int)HttpStatusCode.OK)]
        public async Task<ActionResult<ShoppingCart>> GetBasket(string userName)
        {
            var basket = await _repository.GetBasket(userName);
            return Ok(basket ?? new ShoppingCart(userName));
        }

        [HttpPost]
        [ProducesResponseType(typeof(ShoppingCart), (int)HttpStatusCode.OK)]
        public async Task<ActionResult<ShoppingCart>> UpdateBasket([FromBody] ShoppingCart basket)
        {
            return Ok(await _repository.UpdateBasket(basket));
        }

        [HttpDelete("{userName}", Name = "DeleteBasket")]
        [ProducesResponseType(typeof(void), (int)HttpStatusCode.OK)]
        public async Task<IActionResult> DeleteBasket(string userName)
        {
            await _repository.DeleteBasket(userName);
            return Ok();
        }
    }
}
We have injected the IBasketRepository object and use it when exposing the APIs.
— You can see that we have exposed CRUD API methods over the BasketController.
— You can also see the annotations:
ProducesResponseType — restricts what an API method produces, so it returns only the declared types of objects.
Route — defines custom routes; mostly used if you have more than one of the same HTTP method.
HttpGet/Put/Post/Delete — declare the HTTP methods.
API Routes in Controller Classes
The controller class provides the routes below as the intended methods in BasketController.cs. Along with this, we designed our APIs from a REST perspective.
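Derived from the BasketController code above, the exposed routes are:
• GET /api/v1/Basket/{userName} -> get the basket for a user
• POST /api/v1/Basket -> update (create or replace) the basket
• DELETE /api/v1/Basket/{userName} -> delete the basket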
Swagger Implementation
Swagger is a dynamic documentation tool that is widely used and accepted in the software world. Its implementation within .NET Core projects is quite simple.
Implementation of Swagger
1- Let’s download and install the Swashbuckle.AspNetCore package to the
web api project via nuget.
2- Let’s add the swagger as a service in the ConfigureServices method in the
Startup.cs class of our project.
public void ConfigureServices(IServiceCollection services)
{
    …
    #region Swagger Dependencies
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "Basket API", Version = "v1" });
    });
    #endregion
}
3- After that we will use this added service in the Configure method in
Startup.cs.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    ...

    app.UseSwagger();
    app.UseSwaggerUI(c =>
    {
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "Catalog API V1");
    });
}
Register Redis into AspNet Built-in Dependency Injection Tool
After we have developed the Controller, the most important part is registering objects into the ASP.NET built-in dependency injection container.
— So in our case, the Controller class uses the Repository object and the Repository object uses the IDistributedCache object. That means we should register both the repository and the IDistributedCache object.
Go to Startup.cs
Register the Repository into the DI service collection.
Add Startup Configurations
Add Startup Configurations
public void ConfigureServices(IServiceCollection services)
{
    services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = Configuration.GetValue<string>("DatabaseSettings:ConnectionString");
    });

    services.AddScoped<IBasketRepository, BasketRepository>();
}
— For IDistributedCache, we have already registered it with the AddStackExchangeRedisCache extension method.
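For reference, the "DatabaseSettings:ConnectionString" value read above comes from appsettings.json; a minimal example for local development could be the following (the host and port are an assumption, and in Docker this points at the basketdb container instead):
{
  "DatabaseSettings": {
    "ConnectionString": "localhost:6379"
  }
}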
Run Application
Now the Basket microservice Web API application is ready to run.
Before running the application, configure the debug profile:
Right click the project file and select the Debug section.
Change Launch browser to swagger
Change the App URL to http://localhost:5001
Hit F5 on the Basket.API project.
Having exposed the Basket APIs in our Basket microservice, you can test them over the Swagger GUI.
You can also test them with Postman in the same way.
Run Application on Docker with Database
So far we have developed the ASP.NET Core Web API project for the Basket microservice. Now it's time to dockerize the Basket.API project together with our Redis image.
Add Docker Compose and Dockerfile
Normally you could add only a Dockerfile to dockerize the Web API application, but we will integrate our API project with the Redis docker image, so we should create a docker-compose file along with the Dockerfile of the API project.
Right Click to Project -> Add -> ..Container Orchestration Support
Continue with default values.
Dockerfile and docker-compose files are created.
docker-compose.yml is a configuration file used during development and testing, where the necessary definitions are made for running multi-container applications.
Docker-compose.yml
version: '3.4'

services:
  basketdb:
    image: redis

  basket.api:
    image: ${DOCKER_REGISTRY-}basketapi
    build:
      context: .
      dockerfile: src/Basket/Basket.API/Dockerfile
Basically, in the docker-compose.yml file we defined 2 services: one is Redis, named basketdb, and the other is the Web API project, named basket.api.
After that we configure these images in the docker-compose.override.yml file, as sketched below.
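As a sketch of what that override could look like (the exact port mappings and environment values are assumptions; 8001 is used here because the Basket API is reached on localhost:8001 further below):
version: '3.4'

services:
  basketdb:
    container_name: basketdb
    restart: always
    ports:
      - "6379:6379"

  basket.api:
    container_name: basket.api
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - "DatabaseSettings:ConnectionString=basketdb:6379"
    depends_on:
      - basketdb
    ports:
      - "8001:80"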
Run the command below at the top of the project folder which includes the docker-compose.yml files:
docker-compose -f docker-compose.yml -f docker-compose.override.yml up --build
That’s it!
You can check microservices as below urls :
Basket API -> http://localhost:8001/swagger/index.html
SEE DATA with Test over Swagger
/api/v1/Basket
Get Udemy Course with discounted — Microservices Architecture and
Implementation on .NET.
Get the Source Code from AspnetRun Microservices Github
Follow Series of Microservices Articles
This article is part of a series of articles. You can follow the series with the links below.
• 0- Microservices Architecture on .NET with applying CQRS, Clean
Architecture and Event-Driven Communication
• 1- Microservice Using ASP.NET Core, MongoDB and Docker Container
• 2- Using Redis with ASP.NET Core, and Docker Container for Basket
Microservices
• 3- Using PostgreSQL and Dapper with ASP.NET and Docker Container for
Discount Microservices
• 4- Building Ocelot API Gateway Microservices with ASP.NET Core and
Docker Container
• 5- Microservices Event Driven Architecture with RabbitMQ and Docker
Container on .NET
• 6- CQRS and Event Sourcing in Event Driven Architecture of Ordering
Microservices
• 7- Microservices Cross-Cutting Concerns with Distributed Logging and
Microservices Resilience
• 8- Securing Microservices with IdentityServer4 with OAuth2 and
OpenID Connect fronted by Ocelot API Gateway
• 9- Using gRPC in Microservices for Building a high-performance
Interservice Communication with .Net 5
• 10-Deploying .Net Microservices to Azure Kubernetes Services(AKS) and
Automating with Azure DevOps
Microservices Using ASP.NET,
PostgreSQL, Dapper Micro-ORM
and Docker Container
Mehmet Ozkaya · Follow
Published in aspnetrun · 12 min read · Apr 1, 2021
Building Discount Microservice on .NET platforms using ASP.NET Web API, Docker, PostgreSQL, Dapper Micro-ORM and Swagger. Test the microservice using Postman.
Introduction
In this article we will show how to perform Discount Microservices
operations on ASP.NET Core Web application using PostgreSQL, Dapper
micro-orm, Docker Container and Swagger.
By the end of the article, we will have a Web API which implements CRUD operations over Coupon objects. These objects will be stored in a PostgreSQL database and retrieved using the Dapper micro-ORM tool.
Developing Discount microservice which includes;
• ASP.NET Web API application
• REST API principles, CRUD operations
• PostgreSQL database connection and containerization
• Repository Pattern implementation
• Using Dapper for micro-orm implementation to simplify data access and
ensure high performance
We will analyze and architect the Discount microservice, applying N-Layer architecture, and containerize the Discount microservice with a PostgreSQL database using Docker Compose.
In the upcoming sections :
• ASP.NET Grpc Server application
• Build a highly performant inter-service gRPC communication with the Basket microservice
• Exposing gRPC services by creating Protobuf messages
So in this section, we are going to Develop Discount.API Microservices with
PostgreSQL.
Background
You can follow the previous article, which explains the overall microservice architecture of this repository.
We will focus on Discount microservice from that overall e-commerce
microservice architecture.
Step by Step Development w/ Udemy Course
Get Udemy Course with discounted — Microservices Architecture and
Implementation on .NET.
Source Code
Get the Source Code from AspnetRun Microservices Github — Clone or fork this repository, and if you like it don't forget to give it a star. If you find a problem or have a question, you can open an issue directly on the repository.
Prerequisites
• Install the .NET Core 5 or above SDK
• Install Visual Studio 2019 v16.x or above
• Docker Desktop
PostgreSQL in Discount Microservices
PostgreSQL is an open source and completely free object relational database
system with powerful features and advantages. Taking advantage of the
security, storability and scalability features of the SQL language, PostgreSQL
is also used as a database manager in many areas.
PostgreSQL is one of the most accepted database management systems in
the industry today. Because it offers users the advantages of successful data
architecture, data accuracy, powerful feature set, and open source.
PostgreSQL is supported by many major operating systems such as UNIX,
Linux, MacOS and Windows.
In terms of performance, PostgreSQL has been found to perform well compared to other commercial or open source databases.
Against some database systems it is faster in certain areas and slower in others; compared to MySQL and databases in the same class, it can be slower in INSERT/UPDATE-heavy workloads because it works strictly transaction-based.
On the other hand, PostgreSQL has significant advantages in terms of features, reliability and flexibility, and it is offered completely free of charge by its open source developers.
For Discount microservices, we are going to store discount coupon data
information into PostgreSQL database in a Coupon table.
Analysis & Design
This project will be the REST API which basically performs CRUD operations on the Discount database.
We should define our Discount use case analysis. In this part we will create the Coupon entity. Our main use cases are getting a Coupon by ProductName and being able to add a new discount item, along with the other CRUD operations on the Coupon entity.
Our main use cases;
• Get Coupon with productname
• Update Coupon
• Delete Coupon
• Checkout Coupon
Architecture of Discount microservices
We are going to use traditional N-Layer architecture. Layered architecture
basically consists of 3 layers. These 3 layers are generally the ones that
should be in every project. You can create a layered structure with more than
these 3 layers, which is called multi-layered architecture.
Data Access Layer: Only database operations are performed on this layer.
The task of this layer is to add, delete, update and extract data from the
database. There is no other operation in this layer other than these
operations.
Business Layer: We implement business logic on this layer. This layer processes the data taken from Data Access before it is used in the project.
We do not use the Data Access layer directly in our applications.
The data coming from the user first goes to the Business layer, from there it
is processed and transferred to the Data Access layer.
Presentation Layer: This is the layer the user interacts with.
It could be a Windows Forms, Web, or Console application.
The main purpose here is to show data to the user and to pass the user's input down to the Business Layer and Data Access.
Simple Data Driven CRUD Microservice
The Discount.API microservice will be a simple CRUD implementation over Coupon data in the PostgreSQL database.
You can apply any internal design pattern per microservice. Since we have one project, we are going to separate the layers using folders inside the project, so we don't need to split the layers into different assemblies.
For the Ordering microservice, on the other hand, we will separate the layers into projects using Clean Architecture and a CQRS implementation.
If we look at the project structure, we are planning to create this layers,
• Domain Layer — Contains business rules and logic.
• Application Layer — Expose endpoints and validations. API layer will be
Controller classes.
• Infrastructure Layer — responsible by persistence operations.
Project Folder Structure
• Entities — PostgreSQL entity
• Data — PostgreSQL data context
• Repositories — PostgreSQL repos
• Controllers — api classes
Setup PostgreSQL Docker Database for Discount.API
Microservices
We are going to Setup PostgreSQL Docker Database for Discount.API
Microservices.
First, We should go to DockerHub and find Postgres official image.
Postgres DockerHub;
https://hub.docker.com/_/postgres
This time we are going to write directly on docker-compose file.
docker-compose.yml
version: '3.4'

services:
  catalogdb:
    image: mongo

  basketdb:
    image: redis:alpine

  discountdb:            # ADDED
    image: postgres

  ...

volumes:
  portainer_data:
  postgres_data:         # ADDED
—
Check the environment variables (PGDATA, username, etc.) at https://hub.docker.com/_/postgres
—
docker-compose.override.yml
discountdb:
  container_name: discountdb
  environment:
    - POSTGRES_USER=admin
    - POSTGRES_PASSWORD=admin1234
    - POSTGRES_DB=DiscountDb
  restart: always
  ports:
    - "5432:5432"
  volumes:
    - postgres_data:/var/lib/postgresql/data/
We have added the PostgreSQL database to our docker-compose files. We have not started it yet, because we also need a management portal for PostgreSQL.
Setup pgAdmin Management Portal for PostgreSQL Database
for Discount.API Microservices
We are going to Setup pgAdmin Manage Portal for PostgreSQL Database for
Discount.API Microservices.
pgAdmin is one of the popular and feature rich Open Source administration
and development platform for PostgreSQL.
We will use pgAdmin to create the PostgreSQL Discount database and add some sample records into the table for the Discount microservice.
We should go to DockerHub and find the pgAdmin image.
— Now it's time to add the management portal of PostgreSQL, which is pgAdmin.
Check env variables
https://www.pgadmin.org/docs/pgadmin4/latest/container_deployment.html
Run a simple container over port 80, setting some configuration options:
docker pull dpage/pgadmin4
docker run -p 80:80 \
    -e 'PGADMIN_DEFAULT_EMAIL=user@domain.com' \
    -e 'PGADMIN_DEFAULT_PASSWORD=SuperSecret' \
    -e 'PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION=True' \
    -e 'PGADMIN_CONFIG_LOGIN_BANNER="Authorised users only!"' \
    -e 'PGADMIN_CONFIG_CONSOLE_LOG_LEVEL=10' \
    -d dpage/pgadmin4
After getting this information, it's time to add pgAdmin to our docker-compose files alongside PostgreSQL.
docker-compose.yml
version: '3.4'

services:
  ...
  discountdb:
    image: postgres

  pgadmin:               # ADDED
    image: dpage/pgadmin4

volumes:
  portainer_data:
  postgres_data:
  pgadmin_data:          # ADDED
docker-compose.override.yml
pgadmin:
  container_name: pgadmin
  environment:
    - PGADMIN_DEFAULT_EMAIL=admin@aspnetrun.com
    - PGADMIN_DEFAULT_PASSWORD=admin1234
  restart: always
  ports:
    - "5050:80"
  volumes:
    - pgadmin_data:/root/.pgadmin
So now we have the Postgres database and also pgAdmin, the management tool for Postgres.
RUN with below command on that location;
docker-compose -f docker-compose.yml -f docker-compose.override.yml
up -d
docker-compose -f docker-compose.yml -f docker-compose.override.yml
down
Check portainer containers
localhost:9000
Check pgAdmin
localhost:5050
Log in to the system with
— PGADMIN_DEFAULT_EMAIL=admin@aspnetrun.com
— PGADMIN_DEFAULT_PASSWORD=admin1234
After that we can manage our PostgreSQL database by adding a new server and connecting to our DiscountDb database.
Add New Server
General
name — DiscountServer
Connection
hostname — discountdb — this should be the docker container name of PostgreSQL
username — admin
password — admin1234
See the docker-compose.override entry of PostgreSQL:
discountdb:
  container_name: discountdb
  environment:
    - POSTGRES_USER=admin
    - POSTGRES_PASSWORD=admin1234
See “DiscountDb” database under servers.
You can create tables under schemas.
As you can see, we have added the pgAdmin management portal for the PostgreSQL database in our docker-compose files, run docker-compose, and verified that we can manage our PostgreSQL database in the pgAdmin portal, for example by adding a new server and seeing our DiscountDb.
Create Coupon Table in the DiscountDb of PostgreSQL
Database with pgAdmin Management Portal
We are going to Create Coupon Table in the DiscountDb of PostgreSQL
Database with pgAdmin Management.
See “DiscountDb” database under servers.
You can create tables under schemas.
Open pgAdmin
Tools — Query Tool
Create the table with the script below:
CREATE TABLE Coupon(
ID SERIAL PRIMARY KEY NOT NULL,
ProductName VARCHAR(24) NOT NULL,
Description TEXT,
Amount INT
);
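To have a few rows to test with, we can also insert some sample coupons from the same Query Tool (the exact values are an assumption, chosen to match the products used in the tests later on):
INSERT INTO Coupon (ProductName, Description, Amount) VALUES ('IPhone X', 'IPhone Discount', 150);
INSERT INTO Coupon (ProductName, Description, Amount) VALUES ('Samsung 10', 'Samsung Discount', 100);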
Now our PostgreSQL database and table are ready and loaded with predefined data.
Developing Discount.API Microservices Creating Entities
We should create the entity object that maps to the Coupon table.
Create “Entities” folder
Entities
Add Coupon class
namespace Discount.API.Entities
{
public class Coupon
{
public int Id { get; set; }
public string ProductName { get; set; }
public string Description { get; set; }
public int Amount { get; set; }
}
}
Developing the Repository Pattern to Connect to PostgreSQL using Dapper on the Discount.API Microservice
We are going to develop the Repository Pattern to connect to PostgreSQL using Dapper on the Discount.API microservice.
We will encapsulate the data-related objects in repository classes; this way there is an abstraction over data access in the repository layer.
First of all, we should install required nuget packages
Install Nuget Package
Open Package Manager Console — PMC
Select Project — Discount.API
Run Command:
Install-Package Npgsql
Install-Package Dapper
If required Update Packages
Update-Package -ProjectName Discount.API
Dapper
Dapper is a lightweight Micro-ORM (Object Relational Mapper) tool developed by the Stack Overflow team and published as open source on GitHub. It has support for most databases (SQL Server, MySQL, PostgreSQL and so on).
In ADO.NET, we perform our queries or procedures using SqlDataReader, SqlCommand and similar objects. Dapper takes the burden of writing these objects from us.
We can do our filtering by using generic and extension methods. By writing less code, we can execute our queries quickly and convert the results to the type we want. Its biggest feature is that it is very fast, very close to raw ADO.NET speed.
Create Business Layer
After that, we should create the "Repositories" folder and the IDiscountRepository interface.
Repositories Folder
Add Interface — IDiscountRepository
using Discount.API.Entities;
using System.Threading.Tasks;

namespace Discount.API.Repositories.Interfaces
{
    public interface IDiscountRepository
    {
        Task<Coupon> GetDiscount(string productName);
        Task<bool> CreateDiscount(Coupon coupon);
        Task<bool> UpdateDiscount(Coupon coupon);
        Task<bool> DeleteDiscount(string productName);
    }
}
After that, we can implement this interface with using NpgsqlConnection
objects.
Create Repository — DiscountRepository
using Dapper;
using Discount.API.Entities;
using Discount.API.Repositories.Interfaces;
using Microsoft.Extensions.Configuration;
using Npgsql;
using System;
using System.Threading.Tasks;

namespace Discount.API.Repositories
{
    public class DiscountRepository : IDiscountRepository
    {
        private readonly IConfiguration _configuration;

        public DiscountRepository(IConfiguration configuration)
        {
            _configuration = configuration ?? throw new ArgumentNullException(nameof(configuration));
        }

        public async Task<Coupon> GetDiscount(string productName)
        {
            using var connection = new NpgsqlConnection(
                _configuration.GetValue<string>("DatabaseSettings:ConnectionString"));

            var coupon = await connection.QueryFirstOrDefaultAsync<Coupon>(
                "SELECT * FROM Coupon WHERE ProductName = @ProductName",
                new { ProductName = productName });

            if (coupon == null)
                return new Coupon { ProductName = "No Discount", Amount = 0, Description = "No Discount Desc" };

            return coupon;
        }

        public async Task<bool> CreateDiscount(Coupon coupon)
        {
            using var connection = new NpgsqlConnection(
                _configuration.GetValue<string>("DatabaseSettings:ConnectionString"));

            var affected = await connection.ExecuteAsync(
                "INSERT INTO Coupon (ProductName, Description, Amount) VALUES (@ProductName, @Description, @Amount)",
                new { ProductName = coupon.ProductName, Description = coupon.Description, Amount = coupon.Amount });

            if (affected == 0)
                return false;

            return true;
        }

        public async Task<bool> UpdateDiscount(Coupon coupon)
        {
            using var connection = new NpgsqlConnection(
                _configuration.GetValue<string>("DatabaseSettings:ConnectionString"));

            var affected = await connection.ExecuteAsync(
                "UPDATE Coupon SET ProductName = @ProductName, Description = @Description, Amount = @Amount WHERE Id = @Id",
                new { ProductName = coupon.ProductName, Description = coupon.Description, Amount = coupon.Amount, Id = coupon.Id });

            if (affected == 0)
                return false;

            return true;
        }

        public async Task<bool> DeleteDiscount(string productName)
        {
            using var connection = new NpgsqlConnection(
                _configuration.GetValue<string>("DatabaseSettings:ConnectionString"));

            var affected = await connection.ExecuteAsync(
                "DELETE FROM Coupon WHERE ProductName = @ProductName",
                new { ProductName = productName });

            if (affected == 0)
                return false;

            return true;
        }
    }
}
Basically we are using the "NpgsqlConnection" object as a PostgreSQL connection; it creates the connection for us from the connection string. After that, on the "connection" object we use Dapper's extension methods to run SQL commands. This is a very fast way to use a micro-ORM to retrieve data from the database.
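For reference, the "DatabaseSettings:ConnectionString" value read by the repository lives in appsettings.json; a minimal example, assuming the PostgreSQL credentials defined in docker-compose above (the host and port would change when running inside Docker):
"DatabaseSettings": {
  "ConnectionString": "Server=localhost;Port=5432;Database=DiscountDb;User Id=admin;Password=admin1234;"
}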
Create DiscountController Class for Discount.API Microservice
First of all, we should create the DiscountController class in the Controllers folder.
Controller + Startup.cs
[ApiController]
[Route("api/v1/[controller]")]
public class DiscountController : ControllerBase
{
    private readonly IDiscountRepository _repository;

    public DiscountController(IDiscountRepository repository)
    {
        _repository = repository ?? throw new ArgumentNullException(nameof(repository));
    }

    [HttpGet("{productName}", Name = "GetDiscount")]
    [ProducesResponseType(typeof(Coupon), (int)HttpStatusCode.OK)]
    public async Task<ActionResult<Coupon>> GetDiscount(string productName)
    {
        var discount = await _repository.GetDiscount(productName);
        return Ok(discount);
    }

    [HttpPost]
    [ProducesResponseType(typeof(Coupon), (int)HttpStatusCode.OK)]
    public async Task<ActionResult<Coupon>> CreateDiscount([FromBody] Coupon coupon)
    {
        await _repository.CreateDiscount(coupon);
        return CreatedAtRoute("GetDiscount", new { productName = coupon.ProductName }, coupon);
    }

    [HttpPut]
    [ProducesResponseType(typeof(Coupon), (int)HttpStatusCode.OK)]
    public async Task<ActionResult<Coupon>> UpdateDiscount([FromBody] Coupon coupon)
    {
        return Ok(await _repository.UpdateDiscount(coupon));
    }

    [HttpDelete("{productName}", Name = "DeleteDiscount")]
    [ProducesResponseType(typeof(void), (int)HttpStatusCode.OK)]
    public async Task<ActionResult<bool>> DeleteDiscount(string productName)
    {
        return Ok(await _repository.DeleteDiscount(productName));
    }
}
We have injected the IDiscountRepository object and use it when exposing the APIs.
You can see that we have exposed CRUD API methods over the DiscountController.
Also note the attributes:
ProducesResponseType — documents and restricts what an API method can return, i.e. only the declared types and status codes.
Route — defines custom routes; mostly used if you have more than one action for the same HTTP method.
HttpGet/HttpPut/HttpPost/HttpDelete — map the action to the corresponding HTTP method.
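The heading above also mentions Startup.cs: before the controller can resolve IDiscountRepository, the repository must be registered in the ASP.NET built-in dependency injection container, following the same pattern we used for the Basket microservice. A minimal sketch:
public void ConfigureServices(IServiceCollection services)
{
    // other registrations (Swagger, etc.) omitted
    services.AddScoped<IDiscountRepository, DiscountRepository>();
    services.AddControllers();
}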
Test and Run Discount Microservice
We are going to test and run the Discount microservice. Now the Discount microservice Web API application is ready to run.
Before running the application, configure the debug profile:
Right click the project file and select the Debug section.
Change Launch browser to swagger
Change the App URL to http://localhost:5002
Hit F5 on the Discount.API project.
Having exposed the Discount APIs in our Discount microservice, you can test them over the Swagger GUI.
You can also test them with Postman:
http://localhost:5002/swagger/index.html
See CRUD operations on Swagger
—Set productName
productName — IPhone X
productName — Samsung 10
Get Success:
{
  "id": 1,
  "productName": "IPhone X",
  "description": "IPhone Discount",
  "amount": 150
}
Create
{
  "productName": "Huawei Plus",
  "description": "test new product",
  "amount": 550
}
Update
{
  "productName": "Huawei Plus",
  "description": "test update",
  "amount": 200
}
Delete
productName — IPhone X
productName — Samsung 10
productName — Huawei Plus
As you can see, we have tested our Discount microservice and it is working fine.
Get Udemy Course with discounted — Microservices Architecture and
Implementation on .NET.
Get the Source Code from AspnetRun Microservices Github
Follow Series of Microservices Articles
This article is part of a series of articles. You can follow the series with the links below.
• 0- Microservices Architecture on .NET with applying CQRS, Clean
Architecture and Event-Driven Communication
• 1- Microservice Using ASP.NET Core, MongoDB and Docker Container
• 2- Using Redis with ASP.NET Core, and Docker Container for Basket
Microservices
• 3- Using PostgreSQL and Dapper with ASP.NET and Docker Container for
Discount Microservices
• 4- Building Ocelot API Gateway Microservices with ASP.NET Core and
Docker Container
• 5- Microservices Event Driven Architecture with RabbitMQ and Docker
Container on .NET
• 6- CQRS and Event Sourcing in Event Driven Architecture of Ordering
Microservices
• 7- Microservices Cross-Cutting Concerns with Distributed Logging and
Microservices Resilience
• 8- Securing Microservices with IdentityServer4 with OAuth2 and
OpenID Connect fronted by Ocelot API Gateway
• 9- Using gRPC in Microservices for Building a high-performance
Interservice Communication with .Net 5
• 10-Deploying .Net Microservices to Azure Kubernetes Services(AKS) and
Automating with Azure DevOps
Building Ocelot API Gateway
Microservices with ASP.NET Core
and Docker Container
Mehmet Ozkaya · Follow
Published in aspnetrun · 17 min read · May 15, 2020
Building Ocelot API Gateway Microservice on .Net platforms which used Asp.Net
Web Application, Docker, Ocelot. Test microservice with applying Gateway Routing
Pattern.
Introduction
In this article we will show how to perform API Gateway microservices
operations on ASP.NET Core Web application using Ocelot Api Gateway
library.
By the end of the section, we will have a Web project which implements Ocelot API Gateway routing operations over the Catalog, Discount, Basket and Ordering microservices.
Look at the final appearance of application.
You’ll learn how to Create Ocelot API Gateway microservice which includes;
• Implement API Gateways with Ocelot
• ASP.NET Core Empty Web application
• Ocelot Routing, UpStream, DownStream
• Ocelot Configuration
• Sample microservices/containers to reroute through the API Gateways
• Run multiple different API Gateway/BFF container types
• The Gateway Aggregation Pattern in Shopping.Aggregator
• Containerize Ocelot Microservices using Docker Compose
Background
You can follow the previous article, which explains the overall microservice architecture of this repository.
We will focus on Api Gateway microservice from that overall e-commerce
microservice architecture.
Step by Step Development w/ Udemy Course
Get Udemy Course with discounted — Microservices Architecture and
Implementation on .NET.
Source Code
Get the Source Code from AspnetRun Microservices Github — Clone or fork this repository, and if you like it don't forget to give it a star. If you find a problem or have a question, you can open an issue directly on the repository.
Prerequisites
• Install the .NET Core 5 or above SDK
• Install Visual Studio 2019 v16.x or above
• Docker Desktop
The Gateway Routing pattern
The main objective of the Gateway Routing pattern is to route requests to multiple services using a single endpoint. This pattern is useful when you wish to expose multiple services on a single endpoint and route to the appropriate service based on the request.
When a client needs to consume multiple services, setting up a separate
endpoint for each service and having the client manage each endpoint can
be challenging. For example, an e-commerce application might provide
services such as search, reviews, cart, checkout, and order history.
The solution is to place a gateway in front of a set of applications, services, or
deployments. Use application Layer 7 routing to route the request to the
appropriate instances. With this pattern, the client application only needs to
know about and communicate with a single endpoint.
If a service is consolidated or decomposed, the client does not necessarily
require updating. It can continue making requests to the gateway, and only
the routing changes. So this pattern is the ancestor of API Gateway Pattern.
Api Gateway Design Pattern
We can think of the API gateway as a proxy layer sitting in front of our internal services. When the user sends a request from the application, they don't know what is going on behind it; the API gateway may be calling dozens of internal microservices to respond to that request. This is exactly where the API Gateway pattern helps us figure out how to fetch and aggregate the data we need.
This pattern is a service that provides a single-entry point for certain groups
of microservices. It’s similar to the Facade pattern from object-oriented
design, but in this case, it’s part of a distributed system. The API Gateway
pattern is also sometimes known as the “Backend For Frontend” (BFF)
because you build it while thinking about the needs of the client app.
Therefore, the API gateway sits between the client apps and the
microservices. It acts as a reverse proxy, routing requests from clients to
services. It can also provide other cross-cutting features such as
authentication, SSL termination, and cache.
Backend for Frontend Pattern — BFF
When splitting the API Gateway tier into multiple API Gateways, if your
application has multiple client apps, that can be a primary pivot when
identifying the multiple API Gateways types, so that you can have a different
facade for the needs of each client app.
This case is a pattern named “Backend for Frontend” (BFF) where each API
Gateway can provide a different API tailored for each client app type,
possibly even based on the client form factor by implementing specific
adapter code which underneath calls multiple internal microservices.
Main features in the API Gateway
Reverse proxy or gateway forwarding: the API Gateway offers a reverse proxy for redirecting or forwarding requests (layer 7 routing, usually HTTP requests) to the endpoints of internal microservices.
Request aggregation: as part of the gateway pattern, you can aggregate multiple client requests into a single client request, often targeting multiple internal microservices. With this approach, the client application sends a single request to the API Gateway, which sends several requests to the internal microservices, then collects the results and sends everything back to the client application.
Common cross-cutting features include:
Cross-cutting concerns or gateway offloading
Authentication and authorization
Service discovery integration
Response caching
Retry policies, circuit breaker, and QoS
Rate limiting and throttling
Load balancing
Logging, tracing, correlation
Headers, query strings, and claims transformation
IP allowlisting
Ocelot API Gateway
Ocelot is basically a set of middlewares that you can apply in a specific order.
Ocelot is a lightweight API Gateway, recommended for simpler approaches.
Ocelot is an Open Source .NET Core-based API Gateway especially made for
microservices architectures that need unified points of entry into their
systems. It’s lightweight, fast, and scalable and provides routing and
authentication among many other features.
The main reason to choose Ocelot for our reference application is because
Ocelot is a .NET Core lightweight API Gateway that you can deploy into the
same application deployment environment where you’re deploying your
microservices/containers, such as a Docker Host, Kubernetes, etc. And since
it’s based on .NET Core, it’s cross-platform allowing you to deploy on Linux
or Windows.
Ocelot is designed to work with ASP.NET Core only. You install Ocelot and its
dependencies in your ASP.NET Core project with Ocelot’s NuGet package,
from Visual Studio.
Analysis & Design
This project will be the REST APIs which basically perform Routing
operations on Catalog, Basket and Ordering microservices.
We should define our Ocelot API Gateway use case analysis.
Our main use cases;
• Route Catalog APIs with the /Catalog path
• Route Basket APIs with the /Basket path
• Route Discount APIs with the /Discount path
• Route Ordering APIs with the /Ordering path
Along with this we should design our APIs according to REST perspective.
Starting Our Project
Create new web application with visual studio.
First, open File -> New -> Project. Select ASP.NET Core Web Application, give
your project a name and select OK.
In the next window, select .Net Core and ASP.Net Core latest version and
select Web API and then uncheck “Configure for HTTPS” selection and click
OK. This is the default Web API template selected. Unchecked for https
because we don’t use https for our api’s now.
Add New Blank Web project under below location and name;
src/ApiGateway/APIGateway
Library & Frameworks
For the API Gateway microservice, we have to add the following library to our NuGet packages:
• Ocelot — API Gateway library
Configure Ocelot in Startup.cs
In order to configure and use Ocelot in Asp.Net Core project, we should
define Ocelot methods into Startup.cs.
Go to Class -> Startup.cs
public void ConfigureServices(IServiceCollection services)
{
services.AddOcelot();
}
// This method gets called by the runtime. Use this method to
configure the HTTP request pipeline.
public async void Configure(IApplicationBuilder app,
IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseRouting();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
//ocelot
await app.UseOcelot();
}
Configuration Json File Definition of Ocelot
In order to use the routing function of Ocelot, we should provide the configuration JSON file when the ASP.NET Web application starts.
Go to Class -> Program.cs
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureAppConfiguration((hostingContext, config)
=>
{
config.AddJsonFile($"ocelot.
{hostingContext.HostingEnvironment.EnvironmentName}.json", true,
true);
})
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
});
We have added the configuration file according to the environment name.
Since I changed the environment variable to Local, the application will pick up the ocelot.Local.json configuration.
Ocelot.Local.json File Routing API
In order to use routing we should create ocelot.Local.json and put the JSON route definitions in it.
The important part here is that each element we define in the Routes array represents a route to a service.
In the DownstreamPathTemplate field, we define the URL path of the internal API application. In UpstreamPathTemplate, we specify which path the user calls on the gateway to reach that API.
Create Ocelot.Local.json file.
{
"Routes": [
//Catalog API
{
"DownstreamPathTemplate": "/api/v1/Catalog",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": "8000"
}
],
"UpstreamPathTemplate": "/Catalog",
"UpstreamHttpMethod": [ "GET", "POST", "PUT" ]
},
{
"DownstreamPathTemplate": "/api/v1/Catalog/{id}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": "8000"
}
],
"UpstreamPathTemplate": "/Catalog/{id}",
"UpstreamHttpMethod": [ "GET", "DELETE" ]
},
{
"DownstreamPathTemplate": "/api/v1/Catalog/
GetProductByCategory/{category}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": "8000"
}
],
"UpstreamPathTemplate": "/Catalog/GetProductByCategory/
{category}",
"UpstreamHttpMethod": [ "GET" ]
},
//Basket API
{
"DownstreamPathTemplate": "/api/v1/Basket/{userName}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": "8001"
}
],
"UpstreamPathTemplate": "/Basket/{userName}",
"UpstreamHttpMethod": [ "GET", "DELETE" ]
},
{
"DownstreamPathTemplate": "/api/v1/Basket",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": "8001"
}
],
"UpstreamPathTemplate": "/Basket",
"UpstreamHttpMethod": [ "POST" ]
},
{
"DownstreamPathTemplate": "/api/v1/Basket/Checkout",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": "8001"
}
],
"UpstreamPathTemplate": "/Basket/Checkout",
"UpstreamHttpMethod": [ "POST" ]
},
//Discount API
{
"DownstreamPathTemplate": "/api/v1/Discount/{productName}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": "8002"
}
],
"UpstreamPathTemplate": "/Discount/{productName}",
"UpstreamHttpMethod": [ "GET", "DELETE" ]
},
{
"DownstreamPathTemplate": "/api/v1/Discount",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": "8002"
}
],
"UpstreamPathTemplate": "/Discount",
"UpstreamHttpMethod": [ "PUT", "POST" ]
},
//Order API
{
"DownstreamPathTemplate": "/api/v1/Order/{userName}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": "8004"
}
],
"UpstreamPathTemplate": "/Order/{userName}",
"UpstreamHttpMethod": [ "GET" ]
}
],
"GlobalConfiguration": {
"BaseUrl": "http://localhost:5010"
}
}
These route definitions open the APIs to the outside of the system and redirect those requests into internal API calls.
Of course, these requests may carry a token, and the Ocelot API gateway also carries this token along when calling the internal systems.
Also, as you can see, we have simplified the API calls by removing the internal api path and using only the /Catalog path when exposing the APIs to the client application.
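For example, with the configuration above, a client call to the gateway is translated into the internal Catalog API call like this:
GET http://localhost:5010/Catalog          ->  GET http://localhost:8000/api/v1/Catalog
GET http://localhost:5010/Catalog/{id}     ->  GET http://localhost:8000/api/v1/Catalog/{id}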
We have developed and configured our Ocelot API Gateway microservice with separate environment configurations.
Run Application
Now the API Gateway microservice Web application is ready to run.
Before we start, make sure that the docker-compose microservices are running properly. Start the Docker environment:
docker-compose -f docker-compose.yml -f docker-compose.override.yml
up -d
docker-compose -f docker-compose.yml -f docker-compose.override.yml
down
Check docker urls :
Catalog
http://localhost:8000/swagger/index.html
Basket
http://localhost:8001/swagger/index.html
Discount
http://localhost:8002/swagger/index.html
Ordering
http://localhost:8004/swagger/index.html
Before running the application, configure the debug profile;
Right Click the project File and Select to Debug section.
Change the App URL to http://localhost:7000
Hit F5 on APIGateway project.
Having exposed the API Gateway in front of our microservices, you can test it over Chrome or Postman.
Rate Limiting in Ocelot Api Gateway with Configuring
Ocelot.json File
We are going to do Rate Limiting in Ocelot Api Gateway with Configuring
Ocelot.json File.
Rate Limiting Ocelot
https://ocelot.readthedocs.io/en/latest/features/ratelimiting.html
— Example configuration
"RateLimitOptions": {
  "ClientWhitelist": [],
  "EnableRateLimiting": true,
  "Period": "5s",
  "PeriodTimespan": 1,
  "Limit": 1
}
After that we can try on Catalog route configuration.
EDIT Get Catalog
"Routes": [
  //Catalog API
  {
    "DownstreamPathTemplate": "/api/v1/Catalog",
    "DownstreamScheme": "http",
    "DownstreamHostAndPorts": [
      {
        "Host": "localhost",
        "Port": "8000"
      }
    ],
    "UpstreamPathTemplate": "/Catalog",
    "UpstreamHttpMethod": [ "GET", "POST", "PUT" ],
    "RateLimitOptions": {
      "ClientWhitelist": [],
      "EnableRateLimiting": true,
      "Period": "5s",
      "PeriodTimespan": 1,
      "Limit": 1
    }
  },
— Test on Postman
GET
http://localhost:5010/Catalog
Second Call ERROR
API calls quota exceeded! maximum admitted 1 per 5s.
Response Caching in Ocelot Api Gateway with Configuring
Ocelot.json File
We are going to do response Caching in Ocelot Api Gateway with Configuring
Ocelot.json File.
Response Caching
https://ocelot.readthedocs.io/en/latest/features/caching.html
— Example configuration
"FileCacheOptions": { "TtlSeconds": 30 }
After that, we should add required nuget package;
Add Nuget Package
Install-Package Ocelot.Cache.CacheManager
Modify DI Ocelot
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddOcelot().AddCacheManager(settings => settings.WithDictionaryHandle()); // CHANGED !!
}
Add Configuration ocelot.json
— how many seconds to keep the cache
"FileCacheOptions": { "TtlSeconds": 30 }
— UPDATE Catalog API
"Routes": [
  //Catalog API
  {
    "DownstreamPathTemplate": "/api/v1/Catalog",
    "DownstreamScheme": "http",
    "DownstreamHostAndPorts": [
      {
        "Host": "localhost",
        "Port": "8000"
      }
    ],
    "UpstreamPathTemplate": "/Catalog",
    "UpstreamHttpMethod": [ "GET", "POST", "PUT" ],
    "FileCacheOptions": { "TtlSeconds": 30 }    // ADDED
  },
This gives a 30 second cache.
— The second call will come from the cache.
As you can see, we have performed response caching in the Ocelot API Gateway by configuring the ocelot.json file.
Configure Ocelot Json For Docker Development Environment in
Ocelot Api Gateway
We are going to Configure Ocelot Json For Docker Development
Environment in Ocelot Api Gateway.
Before adding the ocelot.Development.json file, let me clarify our environments.
— When we create the ASP.NET web application in this repository, you can see the environment value is Local.
— We now change the environment to Development.
Right Click OcelotApiGw
Properties
Debug
ASPNETCORE_ENVIRONMENT = Local
change
ASPNETCORE_ENVIRONMENT = Development
Also if we check the docker-compose override file, for every configuration
we set ASPNETCORE_ENVIRONMENT=Development.
ocelotapigw:
container_name: ocelotapigw
environment:
— ASPNETCORE_ENVIRONMENT=Development
That's why we can say our Development environment should be the full Docker environment, using the Docker names of the containers.
So far we have only configured our Local environment and tested it successfully.
So now we are going to configure Ocelot for Docker Development
environment in order to Run and Debug Ocelot Api Gw on docker
environment.
ocelot.Development.json
{
  "Routes": [
    //Catalog API
    {
      "DownstreamPathTemplate": "/api/v1/Catalog",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "catalog.api",
          "Port": "80"
        }
      ],
      "UpstreamPathTemplate": "/Catalog",
      "UpstreamHttpMethod": [ "GET", "POST", "PUT" ],
      "FileCacheOptions": { "TtlSeconds": 30 }
    },
    {
      "DownstreamPathTemplate": "/api/v1/Catalog/{id}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "catalog.api",
          "Port": "80"
        }
      ],
      "UpstreamPathTemplate": "/Catalog/{id}",
      "UpstreamHttpMethod": [ "GET", "DELETE" ]
    },
    {
      "DownstreamPathTemplate": "/api/v1/Catalog/GetProductByCategory/{category}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "catalog.api",
          "Port": "80"
        }
      ],
      "UpstreamPathTemplate": "/Catalog/GetProductByCategory/{category}",
      "UpstreamHttpMethod": [ "GET" ]
    },
    //Basket API
    {
      "DownstreamPathTemplate": "/api/v1/Basket/{userName}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "basket.api",
          "Port": "80"
        }
      ],
      "UpstreamPathTemplate": "/Basket/{userName}",
      "UpstreamHttpMethod": [ "GET", "DELETE" ]
    },
    {
      "DownstreamPathTemplate": "/api/v1/Basket",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "basket.api",
          "Port": "80"
        }
      ],
      "UpstreamPathTemplate": "/Basket",
      "UpstreamHttpMethod": [ "POST" ]
    },
    {
      "DownstreamPathTemplate": "/api/v1/Basket/Checkout",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "basket.api",
          "Port": "80"
        }
      ],
      "UpstreamPathTemplate": "/Basket/Checkout",
      "UpstreamHttpMethod": [ "POST" ],
      "RateLimitOptions": {
        "ClientWhitelist": [],
        "EnableRateLimiting": true,
        "Period": "3s",
        "PeriodTimespan": 1,
        "Limit": 1
      }
    },
    //Discount API
    {
      "DownstreamPathTemplate": "/api/v1/Discount/{productName}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "discount.api",
          "Port": "80"
        }
      ],
      "UpstreamPathTemplate": "/Discount/{productName}",
      "UpstreamHttpMethod": [ "GET", "DELETE" ]
    },
    {
      "DownstreamPathTemplate": "/api/v1/Discount",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "discount.api",
          "Port": "80"
        }
      ],
      "UpstreamPathTemplate": "/Discount",
      "UpstreamHttpMethod": [ "PUT", "POST" ]
    },
    //Order API
    {
      "DownstreamPathTemplate": "/api/v1/Order/{userName}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "ordering.api",
          "Port": "80"
        }
      ],
      "UpstreamPathTemplate": "/Order/{userName}",
      "UpstreamHttpMethod": [ "GET" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "http://localhost:5010"
  }
}
As you can see, we have configured the Ocelot JSON for the Docker Development environment in the Ocelot API Gateway.
Run Application on Docker with Database
So far we have developed the ASP.NET Core Web project for the API Gateway microservice. Now it's time to dockerize the APIGateway project together with our existing microservice images.
Add Docker Compose and Dockerfile
Normally you could add only a Dockerfile to dockerize the Web application, but we will integrate our gateway project with the existing microservice containers, so we should create a docker-compose file along with the Dockerfile of the gateway project.
Right Click to Project -> Add -> ..Container Orchestration Support
Continue with default values.
Dockerfile and docker-compose files are created.
docker-compose.yml is a configuration file used during development and testing, where the necessary definitions are made for running multi-container applications.
Docker-compose.yml
version: '3.4'

services:
  ocelotapigw:
    image: ${DOCKER_REGISTRY-}ocelotapigw
    build:
      context: .
      dockerfile: ApiGateways/OcelotApiGw/Dockerfile
Docker-compose.override.yml
version: '3.4'

services:
  ocelotapigw:
    container_name: ocelotapigw
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    depends_on:
      - catalog.api
      - basket.api
      - discount.api
      - ordering.api
    ports:
      - "8010:80"
Basically, in the docker-compose.yml file we created one image for the API gateway.
Run the command below at the top of the project folder which includes the docker-compose.yml files:
docker-compose -f docker-compose.yml -f docker-compose.override.yml up --build
That’s it!
You can check the microservices at the URLs below.
Let's test the Ocelot API Gateway routing features in the Docker environment.
TEST OVER OCELOT
Test
Open Postman
Catalog
GET
http://localhost:8010/Catalog
http://localhost:8010/Catalog/6022a3879745eb1bf118d6e2
http://localhost:8010/Catalog/GetProductByCategory/Smart Phone
Basket
GET
http://localhost:8010/Basket/swn
POST
http://localhost:8010/Basket
{
  "UserName": "swn",
  "Items": [
    {
      "Quantity": 2,
      "Color": "Red",
      "Price": 33,
      "ProductId": "5",
      "ProductName": "5"
    },
    {
      "Quantity": 1,
      "Color": "Blue",
      "Price": 55,
      "ProductId": "3",
      "ProductName": "5"
    }
  ]
}
POST
http://localhost:8010/Basket/Checkout
{
  "userName": "swn",
  "totalPrice": 0,
  "firstName": "swn",
  "lastName": "swn",
  "emailAddress": "string",
  "addressLine": "string",
  "country": "string",
  "state": "string",
  "zipCode": "string",
  "cardName": "string",
  "cardNumber": "string",
  "expiration": "string",
  "cvv": "string",
  "paymentMethod": 1
}
See from Rabbit Management Dashboard
http://localhost:15672
DELETE
http://localhost:8010/Basket/swn
Discount
GET
http://localhost:8010/Discount/IPhone X
DELETE
http://localhost:8010/Discount/IPhone X
Order
GET
http://localhost:8010/Order/swn
As you can see, we have tested the Ocelot API Gateway routing features in the Docker environment.
Develop Shopping.Aggregator microservices with Applying
Gateway Aggregation Pattern
We are going to Develop Shopping.Aggregator microservices with Applying
Gateway Aggregation Pattern.
Shopping.Aggregator Microservice cover the;
• Aggregate multiple client requests using HTTP Client Factory
• Targeting multiple internal microservices into a single client request
• Client app sends a single request to the API Gateway that dispatches
several requests to the internal microservices
• Then aggregates the results and sends everything back to the client app
We are going to route operations over the Catalog, Basket and Ordering microservices. The client only sends the username to the single API exposed by the Shopping.Aggregator microservice.
• Reduce chattiness between the client apps and the backend API
• Implement the Gateway Aggregation pattern in Shopping.Aggregator
• Similar to a custom API Gateway implementation
The Gateway Aggregation pattern
Use a gateway to aggregate multiple individual requests into a single request.
This pattern is useful when a client must make multiple calls to different
backend systems to perform an operation.
To perform a single task, a client may have to make multiple calls to various
backend services. An application that relies on many services to perform a
task must expend resources on each request. When any new feature or
service is added to the application, additional requests are needed, further
increasing resource requirements and network calls.
This chattiness between a client and a backend can adversely impact the
performance and scale of the application. Microservice architectures have
made this problem more common, as applications built around many
smaller services naturally have a higher amount of cross-service calls.
As a Solution, use a gateway to reduce chattiness between the client and the
services. The gateway receives client requests, dispatches requests to the
various backend systems, and then aggregates the results and sends them
back to the requesting client.
As you can see in this picture, We are going to develop Shopping.Aggregator
Microservices with implementing Gateway Aggregation pattern.
This Shopping.Aggregator microservice exposes only one API to the client applications, taking the username as input, and aggregates multiple requests by consuming the Catalog, Basket and Ordering internal microservices.
The client app sends a single request to the API Gateway, which dispatches several requests to the internal microservices, then aggregates the results and sends everything back to the client app. We are going to route operations over the Catalog, Basket and Ordering microservices.
The client only sends the username to the single API exposed by the Shopping.Aggregator microservice.
This will Reduce chattiness between the client apps and the backend API.
Developing Service Classes for Consuming Internal Microservices in the Shopping.Aggregator Microservice
We are going to develop service classes for consuming the internal microservices in Shopping.Aggregator.
Before we start, remember that we have added the URL configurations in the appsettings.json file.
Adding Configurations
"ApiSettings": {
  "CatalogUrl": "http://localhost:8001",
  "BasketUrl": "http://localhost:8002",
  "OrderingUrl": "http://localhost:8004"
},
Create Services Folder — for consuming apis
Create folder
Services
Add service interfaces;
ICatalogService
public interface ICatalogService
{
Task<IEnumerable<CatalogModel>> GetCatalog();
Task<IEnumerable<CatalogModel>> GetCatalogByCategory(string
category);
Task<CatalogModel> GetCatalog(string id);
}
IBasketService
public interface IBasketService
{
Task<BasketModel> GetBasket(string userName);
}
IOrderService
public interface IOrderService
{
Task<IEnumerable<OrderResponseModel>> GetOrdersByUserName(string
userName);
}
Add the service implementation classes, but with no implementation yet; only write the methods with empty bodies.
CatalogService
public class CatalogService : ICatalogService
{
private readonly HttpClient _client;
public CatalogService(HttpClient client)
{
_client = client ?? throw new ArgumentNullException(nameof(client));
}
In these service classes we need to consume APIs, so we need HttpClient. In order to use it, we inject the HttpClient object via the HttpClient factory configured in Startup.cs.
Register HttpClient in the ASP.NET DI container
We are going to register the HttpClient definitions in the ASP.NET built-in dependency injection container.
With the AddHttpClient method we can pass the service types as generic type parameters. This way the HttpClient factory manages HttpClient creation for these typed clients. This is called typed-client HttpClient factory registration.
Startup.cs;
services.AddHttpClient<ICatalogService, CatalogService>(c =>
    c.BaseAddress = new Uri(Configuration["ApiSettings:CatalogUrl"]));

services.AddHttpClient<IBasketService, BasketService>(c =>
    c.BaseAddress = new Uri(Configuration["ApiSettings:BasketUrl"]));

services.AddHttpClient<IOrderService, OrderService>(c =>
    c.BaseAddress = new Uri(Configuration["ApiSettings:OrderingUrl"]));
We have registered the 3 microservice integrations by giving their base addresses. We have registered the service classes, and the HttpClient objects will be managed by the factory through this typed-client configuration.
Now we are ready to implement the service classes.
CatalogService
public class CatalogService : ICatalogService
{
    private readonly HttpClient _client;

    public CatalogService(HttpClient client)
    {
        _client = client ?? throw new ArgumentNullException(nameof(client));
    }

    public async Task<IEnumerable<CatalogModel>> GetCatalog()
    {
        var response = await _client.GetAsync("/api/v1/Catalog");
        return await response.ReadContentAs<List<CatalogModel>>();
    }

    public async Task<CatalogModel> GetCatalog(string id)
    {
        var response = await _client.GetAsync($"/api/v1/Catalog/{id}");
        return await response.ReadContentAs<CatalogModel>();
    }

    public async Task<IEnumerable<CatalogModel>> GetCatalogByCategory(string category)
    {
        var response = await _client.GetAsync($"/api/v1/Catalog/GetProductByCategory/{category}");
        return await response.ReadContentAs<List<CatalogModel>>();
    }
}
BasketService
public class BasketService : IBasketService
{
    private readonly HttpClient _client;

    public BasketService(HttpClient client)
    {
        _client = client ?? throw new ArgumentNullException(nameof(client));
    }

    public async Task<BasketModel> GetBasket(string userName)
    {
        var response = await _client.GetAsync($"/api/v1/Basket/{userName}");
        return await response.ReadContentAs<BasketModel>();
    }
}
OrderService
public class OrderService : IOrderService
{
    private readonly HttpClient _client;

    public OrderService(HttpClient client)
    {
        _client = client ?? throw new ArgumentNullException(nameof(client));
    }

    public async Task<IEnumerable<OrderResponseModel>> GetOrdersByUserName(string userName)
    {
        var response = await _client.GetAsync($"/api/v1/Order/{userName}");
        return await response.ReadContentAs<List<OrderResponseModel>>();
    }
}
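Note that ReadContentAs<T>() is not part of HttpClient itself; it is a small extension method in the repository that checks the response and deserializes the JSON content. A minimal sketch of such an extension (the exact implementation in the repository may differ):
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class HttpClientExtensions
{
    public static async Task<T> ReadContentAs<T>(this HttpResponseMessage response)
    {
        // Fail fast when the downstream call was not successful.
        if (!response.IsSuccessStatusCode)
            throw new ApplicationException($"Something went wrong calling the API: {response.ReasonPhrase}");

        // Read the raw JSON and deserialize it into the requested model type.
        var dataAsString = await response.Content.ReadAsStringAsync();

        return JsonSerializer.Deserialize<T>(dataAsString,
            new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
    }
}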
After developing the service classes, we should expose the API by creating the Controller class.
Develop Controller for Shopping Aggregator
ShoppingController
[ApiController]
[Route("api/v1/[controller]")]
public class ShoppingController : ControllerBase
{
    private readonly ICatalogService _catalogService;
    private readonly IBasketService _basketService;
    private readonly IOrderService _orderService;

    public ShoppingController(ICatalogService catalogService, IBasketService basketService, IOrderService orderService)
    {
        _catalogService = catalogService ?? throw new ArgumentNullException(nameof(catalogService));
        _basketService = basketService ?? throw new ArgumentNullException(nameof(basketService));
        _orderService = orderService ?? throw new ArgumentNullException(nameof(orderService));
    }

    [HttpGet("{userName}", Name = "GetShopping")]
    [ProducesResponseType(typeof(ShoppingModel), (int)HttpStatusCode.OK)]
    public async Task<ActionResult<ShoppingModel>> GetShopping(string userName)
    {
        var basket = await _basketService.GetBasket(userName);

        foreach (var item in basket.Items)
        {
            var product = await _catalogService.GetCatalog(item.ProductId);

            // set additional product fields onto the basket item
            item.ProductName = product.Name;
            item.Category = product.Category;
            item.Summary = product.Summary;
            item.Description = product.Description;
            item.ImageFile = product.ImageFile;
        }

        var orders = await _orderService.GetOrdersByUserName(userName);

        var shoppingModel = new ShoppingModel
        {
            UserName = userName,
            BasketWithProducts = basket,
            Orders = orders
        };

        return Ok(shoppingModel);
    }
}
As you can see, first we call the Basket microservice in order to get the basket information for a given username.
• After that we call the Catalog microservice for every item in the basket, get the product detail information, and enrich the aggregator basket item data with the product information.
• Lastly we call the Ordering microservice to get the user's existing orders.
• Finally we return the aggregated ShoppingModel to the client.
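For completeness, the aggregated response model returned above could look roughly like this (a sketch; the property names are taken from the controller code, and the nested BasketModel and OrderResponseModel are plain DTOs mirroring the Basket and Ordering API responses):
public class ShoppingModel
{
    public string UserName { get; set; }
    public BasketModel BasketWithProducts { get; set; }
    public IEnumerable<OrderResponseModel> Orders { get; set; }
}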
You can create your own BFF — Backend for Frontend — microservices for aggregator or API gateway operations.
These are the 2 design patterns:
• Gateway Aggregation
• Gateway Routing
That means you can create your own custom API gateway following the same steps as the Shopping.Aggregator microservice.
Get Udemy Course with discounted — Microservices Architecture and
Implementation on .NET.
Get the Source Code from AspnetRun Microservices Github
Follow Series of Microservices Articles
This article is part of a series of articles. You can follow the series with the links below.
• 0- Microservices Architecture on .NET with applying CQRS, Clean
Architecture and Event-Driven Communication
• 1- Microservice Using ASP.NET Core, MongoDB and Docker Container
• 2- Using Redis with ASP.NET Core, and Docker Container for Basket
Microservices
• 3- Using PostgreSQL and Dapper with ASP.NET and Docker Container for
Discount Microservices
• 4- Building Ocelot API Gateway Microservices with ASP.NET Core and
Docker Container
• 5- Microservices Event Driven Architecture with RabbitMQ and Docker
Container on .NET
• 6- CQRS and Event Sourcing in Event Driven Architecture of Ordering
Microservices
• 7- Microservices Cross-Cutting Concerns with Distributed Logging and
Microservices Resilience
• 8- Securing Microservices with IdentityServer4 with OAuth2 and
OpenID Connect fronted by Ocelot API Gateway
• 9- Using gRPC in Microservices for Building a high-performance
Interservice Communication with .Net 5
• 10-Deploying .Net Microservices to Azure Kubernetes Services(AKS) and
Automating with Azure DevOps
References
https://docs.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/implement-api-gateways-with-ocelot
https://docs.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern
https://www.youtube.com/watch?v=hlUGZ6Hmv6s
https://codewithmukesh.com/blog/microservice-architecture-in-aspnet-core/
https://www.youtube.com/watch?v=UsoH5cqE1OA
https://myview.rahulnivi.net/api-gateway-envoy-docker/
https://code-maze.com/api-gateway-pattern-dotnet-encapsulate-microservices/
https://medium.com/streamwriter/api-gateway-aspnet-core-a46ef259dc54
https://docs.microsoft.com/en-us/aspnet/core/fundamentals/http-requests?view=aspnetcore-5.0
https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests
https://stackoverflow.com/questions/40027299/where-is-the-postasjsonasync-method-in-asp-net-core
Microservices Event Driven
Architecture with RabbitMQ and
Docker Container on .NET
Mehmet Ozkaya · Follow
Published in aspnetrun · 14 min read · May 19, 2020
Building Microservices Async Communication w/ RabbitMQ & MassTransit
for Checkout Order use cases Between Basket-Ordering Microservices with docker-
compose.
Introduction
In this article we will show how to set up the RabbitMQ connection on the Basket and Ordering microservices for producing and consuming events.
We will develop async microservices communication with RabbitMQ and MassTransit for the checkout order use case between the Basket and Ordering microservices.
By the end of the article, we will have an EventBus.Messages common class library that lets the Basket and Ordering microservices communicate with each other.
The event bus is implemented with RabbitMQ and MassTransit so that microservices can publish and receive events, as shown in the figure below.
You’ll learn how to Create basic Event Bus with RabbitMQ & MassTransit
which includes;
• Async Microservices Communication with RabbitMQ Message-Broker
Service
• Using RabbitMQ Publish/Subscribe Topic Exchange Model
• Using MassTransit for abstraction over RabbitMQ Message-Broker
system
• Publishing BasketCheckout event queue from Basket microservices
and Subscribing this event from Ordering microservices
• Create the RabbitMQ EventBus.Messages common class library and reference it from the microservices
• Containerize RabbitMQ Message Queue system with Basket and Ordering
microservices using Docker Compose
Background
You can follow the previous articles, which explain the overall microservice architecture of this repository.
In this article we will focus on the asynchronous communication between the Basket and Ordering microservices within that overall e-commerce architecture.
Step by Step Development w/ Udemy Course
Get Udemy Course with discounted — Microservices Architecture and
Implementation on .NET.
Source Code
Get the Source Code from AspnetRun Microservices Github — Clone or fork
this repository, if you like don’t forget the star. If you find or ask anything you
can directly open issue on repository.
Prerequisites
• Install the .NET Core 5 or above SDK
• Install Visual Studio 2019 v16.x or above
• Docker Desktop
Pub/Sub RabbitMQ Architecture
We will Analysis and Architecting of Microservices Async Communication
w/ RabbitMQ & MassTransit.
We will create RabbitMQ connection on Basket and Ordering microservices
when producing and consuming events.
Basket microservice which includes;
— Publish BasketCheckout Queue with using MassTransit and RabbitMQ
Ordering Microservice which includes;
— Consuming RabbitMQ BasketCheckout event queue with using
MassTransit-RabbitMQ Configuration
Publisher/Subscriber of BasketCheckout Event w/ Basket and
Ordering Microservices
Here is another view of the publisher/subscriber flow for the BasketCheckout event.
This is the end-to-end use case of the BasketCheckout event:
1- The BasketCheckout command comes from the client application.
2- The Basket microservice performs its operations, such as removing the basket from the Redis database, because it is about to become an order.
3- The Basket microservice publishes the BasketCheckout event to RabbitMQ using MassTransit.
4- This queue is typically implemented with topic-based messaging technology.
The subscriber microservice, in our case the Ordering microservice, consumes this event and creates an order in its SQL Server database.
By the end of this section, we will have a class library that lets the Basket and Ordering microservices communicate with each other.
The event bus is implemented with RabbitMQ so that microservices can publish and receive events.
Before we start development, we should understand microservices communication types and RabbitMQ.
Microservices Communication Types: Request-Driven or Event-Driven Architecture
Here you can see the microservices communication types.
Microservices communication can be divided into three types:
Request-Driven Architecture
Event-Driven Architecture
Hybrid Architecture
Request-Driven Architecture
In the request-response based approach, services communicate using HTTP
or RPC. In most cases, this is done using REST HTTP calls. But we have also
developed that gRPC communication with Basket and Discount
microservices. That communication was Request-Driven Architecture.
Benefits
There is a clear control of the flow, looking at the code of the orchestrator,
we can determine the sequence of the actions.
Tradeoffs
If one of the dependent services is down, there is a high chance that the calls to the other services cannot be completed either.
Event-Driven Architecture
With this communication type, microservices don't call each other directly; instead they publish and consume events through a message broker system in an async way. In this section we will follow this type of communication.
Basket microservice
Publish BasketCheckout Queue with using MassTransit and RabbitMQ
Ordering Microservice
Consuming RabbitMQ BasketCheckout event queue with using MassTransit-
RabbitMQ Configuration
Benefits
The producer service of the events does not know about its consumer services. On the other hand, the consumers also do not necessarily know about the producer. As a result, services can be deployed and maintained independently. This is a key requirement for building loosely coupled microservices.
Tradeoffs
There is no clear central place (orchestrator) defining the whole flow.
The last option is a hybrid architecture. Depending on your scenario, you can pick whichever type of communication fits and apply it.
In our reference application, we chose Request-Driven Architecture for the communication between the Basket and Discount microservices with gRPC calls.
And now we are going to use Event-Driven Architecture for producing and consuming the basket checkout event between the Basket and Ordering microservices.
RabbitMQ
RabbitMQ is a message queuing system. Similar systems include Apache Kafka, MSMQ, Microsoft Azure Service Bus, Kestrel, and ActiveMQ. Its purpose is to transmit a message received from one source to another as soon as its turn comes. In other words, messages wait in a queue until the receiving side is ready to process them. RabbitMQ's support for multiple operating systems and its open source code are among the main reasons it is so widely preferred.
Main Logic of RabbitMQ
Producer: The application that is the source of the message.
Queue: Where messages are stored. Sent messages are put in a queue before they are received. All incoming messages are stored in the queue, in memory.
Consumer: The application that receives and processes the messages on the queue.
Message: The data we send through the queue.
Exchange: The structure that decides which queues to route messages to. It makes the decision according to routing keys.
Binding: The link between an exchange and a queue.
FIFO: Messages in RabbitMQ are processed first in, first out.
The Producer sends a message to be queued. The message is received by the Exchange and routed to one or more queues according to various rules.
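To make these concepts concrete, here is a small illustrative sketch using the raw RabbitMQ .NET client (RabbitMQ.Client). The exchange, queue and routing key names are arbitrary; in the rest of the article we use MassTransit instead of the raw client, so treat this only as an illustration of producer, exchange, queue and binding.

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" }; // producer application
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// the exchange decides which queue(s) receive the message, based on the routing key
channel.ExchangeDeclare("demo-exchange", ExchangeType.Topic, durable: true);

// the queue stores messages until a consumer reads them
channel.QueueDeclare("demo-queue", durable: true, exclusive: false, autoDelete: false);

// the binding links the exchange to the queue for a routing key pattern
// ('*' matches exactly one word, '#' matches zero or more words)
channel.QueueBind("demo-queue", "demo-exchange", "basket.*");

// the message is just the bytes we put on the queue
var body = Encoding.UTF8.GetBytes("hello rabbitmq");
channel.BasicPublish("demo-exchange", "basket.checkout", basicProperties: null, body: body);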
Queue Properties
Name: The name of the queue we have defined.
Durable: Determines the lifetime of the queue. If we want persistence, we have to set it to true. In this project we use it in-memory, so the queue will be deleted when the broker is restarted.
Exclusive: Indicates whether the queue can be used by other connections.
AutoDelete: Indicates whether the queue is deleted once the messages sent to it have been passed to the consumer side.
RabbitMQ Exchange Types
RabbitMQ supports the following exchange types.
Direct Exchange: Addresses a single queue. A routing key is defined for the work to be done and, accordingly, the message is delivered through the direct exchange to the queue whose binding key exactly matches it.
Topic Exchange: In Topic Exchanges, messages are sent to different queues
according to their subject. The incoming message is classified and sent to
the related queue. A route is used to send the message to one or more
queues. It is a variation of the Publish / Subscribe pattern. If the problem
concerns several consumers, Topic Exchange should be used to determine
what kind of message they want to receive.
Fanout Exchange: It is used in situations where the message should be sent
to more than one queue. It is especially applied in Broadcasting systems. It is
mainly used for games for global announcements.
Headers Exchange: Routing is driven by attributes added to the header of the message; the routing key used in the other models is not used. The message is transmitted to the correct queue based on a few attributes and descriptions in the message headers. The attributes in the header and the attributes defined on the binding must match each other's values.
Analysis & Design RabbitMQ & BuildingBlocks
EventBus.Messages
This project will be the class library that holds the shared event bus messages used with RabbitMQ; the Basket and Ordering microservices use this library when communicating.
We are going to implement async communication between the Basket and Ordering microservices using RabbitMQ and MassTransit. When the basket checkout operation is performed, we will publish a BasketCheckout event and consume it from the Ordering microservice.
MassTransit
Let me give some brief information about MassTransit. It is an open-source, lightweight message bus framework for .NET. MassTransit is useful for routing messages over MSMQ, RabbitMQ, TIBCO, and ActiveMQ service buses, with native support for MSMQ and RabbitMQ. MassTransit also supports multicast, versioning, encryption, sagas, retries, transactions, distributed systems, and other features. We are going to use the publish/subscribe feature over RabbitMQ for publishing the basket checkout orders.
So we are going to develop the BuildingBlocks EventBus.Messages class library. Inside this common class library we will create the BasketCheckoutEvent class.
Publisher/Subscriber of BasketCheckout Event
Here is another view of the publisher/subscriber flow for the BasketCheckout event.
This is the end-to-end use case of the BasketCheckout event:
1- The BasketCheckout command comes from the client application.
2- The Basket microservice performs its operations, such as removing the basket from the Redis database, because it is about to become an order.
3- The Basket microservice publishes the BasketCheckout event to RabbitMQ using MassTransit.
4- This queue is typically implemented with topic-based messaging technology.
The subscriber microservice, in our case the Ordering microservice, consumes this event and creates an order in its SQL Server database.
We should define our Event Bus use case analysis.
Our main use cases;
• Create RabbitMQ Connection
• Create BasketCheckout Event
• Develop Basket Microservices as Producer of BasketCheckout Event
• Develop Ordering Microservices as Consumer of BasketCheckout Event
• Update Basket and Items (add — remove item on basket)
• Delete Basket
• Checkout Basket
RabbitMQ Setup with Docker
Here are the docker commands that download RabbitMQ to your local machine and run it.
In order to download and run RabbitMQ from Docker Hub, use the command below:
docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management
After that, if you run the docker ps command, you can see the details of the running RabbitMQ container.
Or we can add the RabbitMQ image into the docker-compose files for a multi-container Docker environment.
Now we can add the rabbitmq image into our docker-compose.yml files.
docker-compose.yml
rabbitmq:
  image: rabbitmq:3-management-alpine
docker-compose.override.yml
rabbitmq:
  container_name: rabbitmq
  restart: always
  ports:
    - "5672:5672"
    - "15672:15672"
We set the rabbitmq configuration through environment variables.
Finally, we can run the RabbitMQ container alongside our Basket and Ordering microservices.
Open a terminal and run the command below in that location:
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d
After this command you can watch your events from the RabbitMQ Management Dashboard on port 15672:
http://localhost:15672/#/queues
username: guest
password: guest
The image above shows the RabbitMQ dashboard metrics.
Library & Frameworks
Setting Up The Publisher Microservice
• Install-Package MassTransit
• Install-Package MassTransit.RabbitMQ
• Install-Package MassTransit.AspNetCore
Developing BuildingBlocks EventBus.Messages Class Library
We are going to Develop BuildingBlocks EventBus.Messages Class Library.
We are going to create below folder structure — Shared Model Library.
BuildingBlocks
EventBus.Messages
IntegrationEvent
BasketCheckoutEvent
Create “Events” folder
Add New Class — IntegrationBaseEvent.cs
public class IntegrationBaseEvent
{
    public IntegrationBaseEvent()
    {
        Id = Guid.NewGuid();
        CreationDate = DateTime.UtcNow;
    }

    public IntegrationBaseEvent(Guid id, DateTime createDate)
    {
        Id = id;
        CreationDate = createDate;
    }

    public Guid Id { get; private set; }
    public DateTime CreationDate { get; private set; }
}
Now we can create our basket checkout event in here.
We create this event in the class library because this data will be shared for
Basket and Ordering microservices.
Add New Class
BasketCheckoutEvent.cs
public class BasketCheckoutEvent : IntegrationBaseEvent
{
    public string UserName { get; set; }
    public decimal TotalPrice { get; set; }
    // BillingAddress
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string EmailAddress { get; set; }
    public string AddressLine { get; set; }
    public string Country { get; set; }
    public string State { get; set; }
    public string ZipCode { get; set; }
    // Payment
    public string CardName { get; set; }
    public string CardNumber { get; set; }
    public string Expiration { get; set; }
    public string CVV { get; set; }
    public int PaymentMethod { get; set; }
}
Produce RabbitMQ Event From Basket Microservice Publisher
of BasketCheckoutEvent
We are going to produce a RabbitMQ event from the Basket microservice, the publisher of the BasketCheckoutEvent.
We are going to implement async communication between the Basket and Ordering microservices using RabbitMQ and MassTransit.
When the basket checkout operation is performed, we will publish a BasketCheckout event and consume it from the Ordering microservice.
In this section, we are focusing on producing the event in the Basket microservice.
Before we start, we need to add project references.
Go to Basket.API
Add Project Reference
EventBus.Messages
We will also add this project reference to Ordering.API later, since it is the common library for event bus messages.
Install the required Nuget packages for setting up the publisher microservice:
• Install-Package MassTransit
• Install-Package MassTransit.RabbitMQ
• Install-Package MassTransit.AspNetCore
We need to configure MassTransit in order to connect with RabbitMQ in our
aspnet project.
Configuring MassTransit: Add DI Configuration
Basket.API — Startup.cs
// MassTransit-RabbitMQ Configuration
services.AddMassTransit(config =>
{
    config.UsingRabbitMq((ctx, cfg) =>
    {
        cfg.Host(Configuration["EventBusSettings:HostAddress"]);
    });
});
appsettings.json
"EventBusSettings": {
  "HostAddress": "amqp://guest:guest@localhost:5672"
},
This adds the MassTransit service to the ASP.NET Core service container and creates a new service bus using RabbitMQ. Here we pass parameters such as the host URL, username and password. Don't forget to keep the host address as a configuration value in the appsettings.json file.
Publish BasketCheckout Queue Message Event in Basket.API
Controller Class
We are going to Publish BasketCheckout Queue Message Event in Basket.API
Controller Class.
Developing BasketCheckout API Method — BasketController.cs
private readonly IBasketRepository _repository;
private readonly DiscountGrpcService _discountGrpcService;
private readonly IPublishEndpoint _publishEndpoint;
private readonly IMapper _mapper;

public BasketController(IBasketRepository repository, DiscountGrpcService discountGrpcService, IPublishEndpoint publishEndpoint, IMapper mapper)
{
    _repository = repository ?? throw new ArgumentNullException(nameof(repository));
    _discountGrpcService = discountGrpcService ?? throw new ArgumentNullException(nameof(discountGrpcService));
    _publishEndpoint = publishEndpoint ?? throw new ArgumentNullException(nameof(publishEndpoint));
    _mapper = mapper ?? throw new ArgumentNullException(nameof(mapper));
}

[Route("[action]")]
[HttpPost]
[ProducesResponseType((int)HttpStatusCode.Accepted)]
[ProducesResponseType((int)HttpStatusCode.BadRequest)]
public async Task<IActionResult> Checkout([FromBody] BasketCheckout basketCheckout)
{
    // get existing basket with total price
    var basket = await _repository.GetBasket(basketCheckout.UserName);
    if (basket == null)
    {
        return BadRequest();
    }

    // set TotalPrice on basketCheckout eventMessage and send checkout event to rabbitmq
    var eventMessage = _mapper.Map<BasketCheckoutEvent>(basketCheckout);
    eventMessage.TotalPrice = basket.TotalPrice;
    await _publishEndpoint.Publish(eventMessage);

    // remove the basket
    await _repository.DeleteBasket(basket.UserName);

    return Accepted();
}
Publishing an event to RabbitMQ is very easy when you are using MassTransit. The key object for publishing messages to RabbitMQ is IPublishEndpoint.
As you can see, we have published the BasketCheckout queue message event in the Basket.API controller class.
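Note that the _mapper.Map<BasketCheckoutEvent>(basketCheckout) call above depends on an AutoMapper profile in Basket.API that is not listed in this article. A minimal sketch (the profile class name is assumed) would be:

public class BasketProfile : Profile
{
    public BasketProfile()
    {
        // maps the incoming BasketCheckout request object to the BasketCheckoutEvent message
        CreateMap<BasketCheckout, BasketCheckoutEvent>().ReverseMap();
    }
}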
Consume RabbitMQ Event From Ordering Microservice
Subscriber of BasketCheckoutEvent
We are going to Consume RabbitMQ Event From Ordering Microservice
which is Subscriber of BasketCheckoutEvent.
We are going to implement async communication between the Basket and Ordering microservices using RabbitMQ and MassTransit.
When the basket checkout operation is performed, we will publish a BasketCheckout event and consume it from the Ordering microservice.
In the last section, we developed publishing the BasketCheckout event from the Basket API. In this section, we are focusing on consuming that event in the Ordering microservice.
Go to Ordering.API — Add Project Reference
• EventBus.Messages
Setting Up The Subscriber Microservice
• Install-Package MassTransit
• Install-Package MassTransit.RabbitMQ
• Install-Package MassTransit.AspNetCore
We need to configure MassTransit in order to connect with RabbitMQ in our
aspnet project.
Add DI Configuration — Ordering.API — Startup.cs
// MassTransit-RabbitMQ Configuration
services.AddMassTransit(config =>
{
    config.AddConsumer<BasketCheckoutConsumer>();
    config.UsingRabbitMq((ctx, cfg) =>
    {
        cfg.Host(Configuration["EventBusSettings:HostAddress"]);
        cfg.ReceiveEndpoint(EventBusConstants.BasketCheckoutQueue, c =>
        {
            c.ConfigureConsumer<BasketCheckoutConsumer>(ctx);
        });
    });
});
services.AddMassTransitHostedService();
appsettings.json
"EventBusSettings": {
  "HostAddress": "amqp://guest:guest@localhost:5672"
},
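The receive endpoint above uses EventBusConstants.BasketCheckoutQueue, a small constants class in the EventBus.Messages library. It is not listed in this article, but based on the queue name we see later on the RabbitMQ dashboard (basketcheckout-queue), a minimal sketch looks like this:

public static class EventBusConstants
{
    // queue name used by the Ordering receive endpoint
    public const string BasketCheckoutQueue = "basketcheckout-queue";
}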
Now we can create the consumer class in Ordering.API: BasketCheckoutConsumer.
Create Folder
"EventBusConsumer"
Add Class
BasketCheckoutConsumer
public class BasketCheckoutConsumer : IConsumer<BasketCheckoutEvent>
{
    // dependencies used below are injected through the constructor
    private readonly IMediator _mediator;
    private readonly IMapper _mapper;
    private readonly ILogger<BasketCheckoutConsumer> _logger;

    public BasketCheckoutConsumer(IMediator mediator, IMapper mapper, ILogger<BasketCheckoutConsumer> logger)
    {
        _mediator = mediator;
        _mapper = mapper;
        _logger = logger;
    }

    public async Task Consume(ConsumeContext<BasketCheckoutEvent> context)
    {
        var command = _mapper.Map<CheckoutOrderCommand>(context.Message);
        var result = await _mediator.Send(command);
        _logger.LogInformation("BasketCheckoutEvent consumed successfully. Created Order Id : {newOrderId}", result);
    }
}
As you can see, we have successfully subscribed to the BasketCheckout queue message event in the Ordering.API BasketCheckoutConsumer class.
Test BasketCheckout Event in Basket.API and Ordering.API
Microservices
We are going to test the BasketCheckout event in the Basket.API and Ordering.API microservices.
Before we start testing, verify that the rabbitmq docker container is running. It is also good practice to run the docker-compose command in order to validate that all the microservices we have developed so far are working.
Open in Terminal
Start Docker
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d
RUN on DOCKER
See from Rabbit Management Dashboard
http://localhost:15672
Testing
First we need to create a basket
POST
/api/v1/Basket
POST PAYLOAD
{
  "UserName": "swn",
  "Items": [
    {
      "Quantity": 2,
      "Color": "Red",
      "Price": 500,
      "ProductId": "60210c2a1556459e153f0554",
      "ProductName": "IPhone X"
    },
    {
      "Quantity": 1,
      "Color": "Blue",
      "Price": 500,
      "ProductId": "60210c2a1556459e153f0555",
      "ProductName": "Samsung 10"
    }
  ]
}
returns
{
  "userName": "swn",
  …
  "totalPrice": 1100
}
After that, checkout the basket:
POST
/api/v1/Basket/Checkout
POST PAYLOAD
{
  "userName": "swn",
  "totalPrice": 0,
  "firstName": "swn",
  "lastName": "swn",
  "emailAddress": "string",
  "addressLine": "string",
  "country": "string",
  "state": "string",
  "zipCode": "string",
  "cardName": "string",
  "cardNumber": "string",
  "expiration": "string",
  "cvv": "string",
  "paymentMethod": 1
}
Check RabbitMQ Dashboard
http://localhost:15672/#/
• See that the exchange types were created successfully.
• See the queues: basketcheckout-queue
• See the exchanges and bindings.
SUCCESS !!!
As you can see, we have successfully tested the BasketCheckout event publish/subscribe model in the Basket.API and Ordering.API microservices.
Conclusion
See from Rabbit Management Dashboard;
http://localhost:15672/#/queues/%2F/basketCheckoutQueue
The queue count should go up by 1 and the message will be consumed by the Ordering microservice.
The queue record is created with the name we defined in our code.
We created a class library that lets the Basket and Ordering microservices communicate with each other.
We also saw how the event bus is implemented with RabbitMQ and MassTransit so that microservices can publish and receive events.
Get Udemy Course with discounted — Microservices Architecture and
Implementation on .NET.
Get the Source Code from AspnetRun Microservices Github
Follow Series of Microservices Articles
This article is part of a series. You can follow the rest of the series with the links below.
• 0- Microservices Architecture on .NET with applying CQRS, Clean
Architecture and Event-Driven Communication
• 1- Microservice Using ASP.NET Core, MongoDB and Docker Container
• 2- Using Redis with ASP.NET Core, and Docker Container for Basket
Microservices
• 3- Using PostgreSQL and Dapper with ASP.NET and Docker Container for
Discount Microservices
• 4- Building Ocelot API Gateway Microservices with ASP.NET Core and
Docker Container
• 5- Microservices Event Driven Architecture with RabbitMQ and Docker
Container on .NET
• 6- CQRS and Event Sourcing in Event Driven Architecture of Ordering
Microservices
• 7- Microservices Cross-Cutting Concerns with Distributed Logging and
Microservices Resilience
• 8- Securing Microservices with IdentityServer4 with OAuth2 and
OpenID Connect fronted by Ocelot API Gateway
• 9- Using gRPC in Microservices for Building a high-performance
Interservice Communication with .Net 5
• 10-Deploying .Net Microservices to Azure Kubernetes Services(AKS) and
Automating with Azure DevOps
Microservices
Rabbitmq
Eventbus
Aspnetcore
Docker
CQRS and Event Sourcing in Event
Driven Architecture of Ordering
Microservices
Mehmet Ozkaya · Follow
Published in aspnetrun · 20 min read · May 22, 2020
Building Ordering Microservices with Clean Architecture and CQRS
Implementation with using MediatR, FluentValidation and AutoMapper.
Introduction
In this article we will show how to build the Ordering microservice as an ASP.NET Core Web API using Entity Framework Core with a SQL Server database, applying Clean Architecture and CQRS.
By the end of the article, we will have a Web API that implements CRUD operations over the Order entity with the CQRS design pattern, using the MediatR, FluentValidation and AutoMapper packages.
Developing Ordering microservice which includes;
• ASP.NET Core Web API application
• REST API principles, CRUD operations
• Entity Framework Core Code-First Approach
• Implementing DDD, CQRS, and Clean Architecture with using Best
Practices applying SOLID principles
• Developing CQRS implementation on commands and queries with using
MediatR, FluentValidation and AutoMapper packages
• SqlServer database connection and containerization
• Using Entity Framework Core ORM and auto migrate to SqlServer when
application startup
• Consuming RabbitMQ BasketCheckout event queue with using
MassTransit-RabbitMQ Configuration
We will analyze and architect the Ordering microservice, applying Clean Architecture and the CQRS design pattern, and containerize it with a SqlServer database using Docker Compose.
So in this section, we are going to develop the Ordering.API microservice with SqlServer.
Background
You can follow the previous articles, which explain the overall microservice architecture of this repository.
We will focus on the Ordering microservice within that overall e-commerce microservice architecture.
Step by Step Development w/ Udemy Course
Get Udemy Course with discounted — Microservices Architecture and
Implementation on .NET.
Source Code
Get the Source Code from AspnetRun Microservices Github — Clone or fork
this repository, if you like don’t forget the star. If you find or ask anything you
can directly open issue on repository.
Prerequisites
• Install the .NET Core 5 or above SDK
• Install Visual Studio 2019 v16.x or above
• Docker Desktop
Ordering.API Microservices
The Ordering microservice is an ASP.NET Core Web API reference application with Entity Framework Core, demonstrating a layered application architecture with DDD best practices. It implements an N-Layer Hexagonal architecture (Core, Application, Infrastructure and Presentation layers) and Domain Driven Design (Entities, Repositories, Domain/Application Services, DTO's…) and aims to be a Clean Architecture, applying SOLID principles so it can be used as a project template.
It also implements the CQRS design pattern in these layers in order to separate queries and commands in the Ordering microservice.
We will be following best practices such as a loosely coupled, dependency-inverted architecture and techniques like Dependency Injection, logging, validation, exception handling, localization and so on.
First, we are going to start by recalling the core concepts and principles of layered architecture design. We will start with small principles and continue with the design patterns, all of which are used in our ASP.NET Core layered architecture project.
Before starting with the architecture, we should know the design principles and some of the design patterns behind clean architecture.
Domain Driven Design (DDD)
Domain Driven Design (DDD) is not a technology or a specific method. It is an approach that tries to solve the basic problems frequently experienced in the development of complex software systems, and in keeping those applications maintainable after they go live. To understand DDD, some basic concepts need to be mastered; with these concepts, we can get into Domain Driven Design.
Ubiquitous Language
It is one of the cornerstones of Domain Driven Design (DDD). For software developers to produce the desired output, and to keep that output consistent over time, they need to speak the same language as the domain expert. We must then transfer this shared language into the methods and classes we create while developing our applications, naming them after the concepts the experts use. Every service we use in our project must have a counterpart in the domain. Thus, everyone involved in the project can speak this common language and understand each other.
Bounded Context
Bounded Context is a recommended approach for complex systems in Domain Driven Design (DDD). A complex domain may contain sub-domains. For example, an e-commerce site may contain many sub-domains: order management, customer management, stock management, delivery management, payment management, product management, user management. As these sub-domains are grouped, a Bounded Context refers to a structure in which the elements most logically associated with each other, in terms of the rules of their Aggregate Roots, are grouped together and the responsibilities of that group are clearly defined.
Clean Architecture (aka Ports and Adapters)
Hexagonal Architecture (aka Ports and Adapters) is one strategy to decouple the use cases from the external details. It was coined by Alistair Cockburn more than 13 years ago, and it has since been refined by the Onion and Clean Architectures.
Ordering Microservice, implements N-Layer Hexagonal architecture (Core,
Application, Infrastructure and Presentation Layers) and Domain Driven
Design (Entities, Repositories, Domain/Application Services, DTO’s…). Also
implements and provides a good infrastructure to implement best practices
such as Dependency Injection, logging, validation, exception handling,
localization and so on.
It is aimed to be a Clean Architecture, also called Onion Architecture, applying SOLID principles so it can be used as a project template. The image below represents the development architecture approach of the run repository series;
According to this diagram, we applied these layers and detail components in
our project.
CQRS (Command Query Responsibility Segregation) Design
Pattern
CQRS means separation of command and query responsibilities. Although it has become an increasingly popular pattern in recent years, its popularity really grew after Martin Fowler wrote about the pattern, which was originally described by Greg Young.
We can say that it is based on the CQS (Command Query Separation)
principle for the CQRS architecture. The main idea of CQS is to separate the
interfaces between our operations that read the data and the operations that
update the data. In CQRS, this is added to the separation of our business
models.
CQRS is a software development pattern based on conducting reading and
writing / updating processes on different models. The data you read and the
data you write are stored in different database tools.
Command — First of all, command type is the only way to change something
in the system. If there is no command, the state of the system remains
unchanged. Command types should not return any value. Command types
often act as plain objects that are sent to CommandHandler as parameters
when used with CommandHandler and CommandDispatcher types.
Query — The only way to do similar reading is the type of Query. They
cannot change the state of the system. They are generally used with
QueryHandler and QueryDispatcher types.
After separating commands and queries in our system, it becomes clear that the domain model we use for writing data is not always suitable for reading data.
Eventual Consistency
In strongly consistent systems, all models are stored consistently without interruption. In eventually consistent systems, the models may be inconsistent for a while as a result of write / update operations. This situation is finally resolved, and the system eventually becomes consistent.
In systems developed using the CQRS software pattern, the write / update
requests from the client are transmitted to the existing services on the
system with the write model. If the request is processed, the necessary
changes are made to the reading model. When the user queries data, the
services answer the requests with the reading model.
In CQRS, the read model is updated by asynchronous processes after the write model is persisted. This is the reason the term Eventual Consistency appears here. Generally, systems developed with a CQRS design are eventually consistent systems.
Since there is no transactional dependency in eventually consistent systems, reading and writing actions do not wait for each other. This is an important contributor to the performance of CQRS systems.
Event Sourcing
Event Sourcing is a method built on the idea of accumulating the events that take place in our system. Objects that are core parts of the system and have an identity are called entities.
In systems developed with Event Sourcing, the latest state of the entities is not recorded. Instead, the events affecting the state of the entities are recorded. When a query is sent by the client and the final state of an entity is requested, the system combines the stored event information and provides the client with the necessary result.
Events occurring in order make up the resulting entity for us. In other words, if an entity is expressed as the sum of the events that created it, rather than as a single data row, it can offer us an eventually consistent structure.
And if our system, consisting of many resources, is built with Event Sourcing in this way, we can perform eventually consistent operations through it. As an example, we can express the transactions in a stock service as in the picture.
In the Event Sourcing structure, when the client requests the data, a breakdown of the events related to the entity is created and the state of the data is rebuilt from this transcript. Obviously, generating the state from the events is slower than the traditional method where the data is kept ready.
We can solve the two problems that occur when using the Event Sourcing method (re-creating the entity every time and filtering the entity by its fields) together with CQRS. For this reason, Event Sourcing and CQRS are frequently mentioned together.
CQRS and Event Sourcing
There are read and write models in the CQRS design pattern. In the Event Sourcing method, the final state of the entity is not stored; instead, the event information that affects the state of the entity is stored.
Consider the events as the write model and the final state of our entity as the read model. The client starts an operation to change the state of the entity in the system. The event information generated as a result of the operation is stored in our system as the write model. This, in turn, triggers our process and the read model is updated based on the write model.
As a result, Event Sourcing provides the write model in our system, and it becomes possible to reconstruct the state of our system at any point in time we want. On the other hand, we solve the query and performance problem by using the read model for query operations.
Mediator Design Pattern
As can be understood from the name, the Mediator pattern uses an intermediate class to handle the connections and interactions between classes, instead of having those classes talk to each other directly.
One of the most frequently given examples is the control tower that gives permission to aircraft at airports. Airplanes do not get permission from each other; they only get permission from the tower, and operations take place accordingly. The mediator here is the tower.
We will use this Mediator pattern through the MediatR nuget package in the Ordering.Application project.
· MediatR: to implement the Mediator pattern
Code Structure
According to the clean architecture and CQRS implementation, we applied these layers and their components in our project, so the resulting project structure is as below.
In order to apply Clean Architecture and the CQRS design pattern, we should create these 4 layers:
• Ordering.Domain Layer
• Ordering.Application Layer
• Ordering.API Layer
• Ordering.Infrastructure Layer
The Ordering.Domain and Ordering.Application layers will be the core, and the Ordering.API and Ordering.Infrastructure layers will be the periphery of our reference application.
NOTE: In this article I am going to follow only CQRS implemented part of code
which is Ordering.Application layer. You can follow the whole code from github
repository.
Library & Frameworks
For the Ordering microservice, we add the following libraries as Nuget packages.
Ordering.Application
• MediatR.Extensions.Microsoft.DependencyInjection
• FluentValidation
• FluentValidation.DependencyInjectionExtensions
• AutoMapper
• AutoMapper.Extensions.Microsoft.DependencyInjection
• Microsoft.Extensions.Logging.Abstractions
ProjectReference: Ordering.Domain
Developing Ordering.Application Layer with CQRS Pattern
Implementation in Clean Architecture
This layer is for developing the domain logic of the application. Interfaces drive the business requirements and their implementations in this layer.
The Application layer implements our business logic and use case operations, and the first step is the definition of model classes. The use cases of the project should be handled by the Application layer, so we follow the CQRS design pattern when creating these classes.
The Application layer should cover all business use cases with abstractions; it is responsible for business use cases, business validations, flows and so on.
Now we can create the main folders of the Ordering.Application project.
Main Folders — Create Folders
Contracts
Features
Behaviours
Create all these 3 folders.
These are the main concerns the application should handle:
Application Contracts
Application Features
Application Behaviours
Application Contracts
This folder covers the application's capabilities. It includes the interfaces that abstract the use case implementations.
Application Features
This folder applies the CQRS design pattern for handling business use cases. We will create sub-folders according to the use case definitions, such as get orders by user name, checkout order and so on.
Application Behaviours
This folder is responsible for the application behaviours that are applied while executing use case implementations, for example validations, logging and other cross-cutting concerns.
Developing Ordering.Application Layer — Application
Contracts
We are going to Develop Application Contracts of Ordering.Application Layer
with CQRS Pattern Implementation in Clean Architecture.
Go to the "Contracts" folder and create the sub folders:
"Persistence"
"Infrastructure"
— Create Persistence classes
“Persistence”
IAsyncRepository
IOrderRepository
IAsyncRepository
public interface IAsyncRepository<T> where T : EntityBase
{
    Task<IReadOnlyList<T>> GetAllAsync();
    Task<IReadOnlyList<T>> GetAsync(Expression<Func<T, bool>> predicate);
    Task<IReadOnlyList<T>> GetAsync(Expression<Func<T, bool>> predicate = null,
                                    Func<IQueryable<T>, IOrderedQueryable<T>> orderBy = null,
                                    string includeString = null,
                                    bool disableTracking = true);
    Task<IReadOnlyList<T>> GetAsync(Expression<Func<T, bool>> predicate = null,
                                    Func<IQueryable<T>, IOrderedQueryable<T>> orderBy = null,
                                    List<Expression<Func<T, object>>> includes = null,
                                    bool disableTracking = true);
    Task<T> GetByIdAsync(int id);
    Task<T> AddAsync(T entity);
    Task UpdateAsync(T entity);
    Task DeleteAsync(T entity);
}
IOrderRepository
public interface IOrderRepository : IAsyncRepository<Order>
{
    Task<IEnumerable<Order>> GetOrdersByUserName(string userName);
}
With these repository interfaces we abstract the database related actions.
These interfaces will be implemented in the Infrastructure layer using EF Core and SQL Server.
But that does not matter from the Application layer's point of view; these interfaces could be backed by any ORM and database without affecting the Application layer.
Thanks to this abstraction, infrastructure changes can be made very easily.
Think about it: you could create two infrastructure layers, one for EF Core with SqlServer and another for Dapper with PostgreSQL, and switch between them through configuration.
This way you can easily compare performance and decide which implementation to go with.
After that we can develop the other external infrastructure contracts.
“Infrastructure”
IEmailService
public interface IEmailService
{
    Task<bool> SendEmail(Email email);
}
This interface will also be implemented in the Infrastructure layer with an external mail sender package. We are not interested in that detail in the Application layer; we focus on the customer requirements.
The customer wants us to send a mail when a new order comes in, so we express that with this interface. The concrete implementations will later be configured in the presentation layer through the ASP.NET built-in dependency injection.
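The Email type referenced by IEmailService is a simple model class in the Application layer. It is not listed in this article, but based on how it is used in the checkout command handler later (To, Body, Subject), a minimal sketch is:

public class Email
{
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}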
MediatR Nuget Package
We will use this Mediator Pattern when using MediatR nuget packages in
Ordering.Application project.
· MediatR — To implement Mediator Pattern
MediatR allows us to easily create and send Command and Query objects to the correct Command/Query handlers.
As you can see here, MediatR takes the Command and Query objects and triggers the handler classes.
You can also see the MediatR request pipeline lifecycle for a particular request. While a request travels through MediatR, we can plug in interceptors as pre- and post-processing behaviors.
MediatR Pipeline Behaviour
You can see this more clearly in the picture: here a tracing behavior is added to MediatR as a pipeline behaviour.
Pre- and post-request handling happens in pipeline behaviours, which perform the tracing operations around the request handler.
We will also apply a validation pipeline behavior using the FluentValidation nuget package.
High level "write side" in CQRS
Here is an example of one command request to the Ordering.Application layer.
A request comes in through an API call to Ordering.API and reaches the command handler class in Ordering.Application with the help of the MediatR nuget package.
In the command handler class, infrastructure objects are used to perform the database related operations.
In our case we will check out the order and save the order record.
As you can see, this is the CQRS implementation with the Mediator design pattern.
Developing Ordering.Application Layer — Application Features —
GetOrdersListQuery
We are going to develop the Application Features of the Ordering.Application layer with the CQRS pattern implementation in Clean Architecture.
We have separated Commands and Queries, because we are going to apply the CQRS design pattern.
This pattern is basically about separating the read model from the write model, so we will follow this best practice in the Ordering Application layer.
Queries
Create folder
GetOrdersList
Add New Class into that folder;
GetOrdersListQuery.cs
public class GetOrdersListQuery : IRequest<List<OrdersVm>>
{
    public string UserName { get; set; }

    public GetOrdersListQuery(string userName)
    {
        UserName = userName ?? throw new ArgumentNullException(nameof(userName));
    }
}
According to the MediatR implementation, every IRequest implementation should have a handler class. MediatR triggers this handler class when the request comes in.
GetOrdersListQueryHandler.cs
public class GetOrdersListQueryHandler : IRequestHandler<GetOrdersListQuery, List<OrdersVm>>
{
    private readonly IOrderRepository _orderRepository;
    private readonly IMapper _mapper;

    public GetOrdersListQueryHandler(IOrderRepository orderRepository, IMapper mapper)
    {
        _orderRepository = orderRepository ?? throw new ArgumentNullException(nameof(orderRepository));
        _mapper = mapper ?? throw new ArgumentNullException(nameof(mapper));
    }

    public async Task<List<OrdersVm>> Handle(GetOrdersListQuery request, CancellationToken cancellationToken)
    {
        var orderList = await _orderRepository.GetOrdersByUserName(request.UserName);
        return _mapper.Map<List<OrdersVm>>(orderList);
    }
}
This class needs a DTO object for the query operation. It is best practice to separate your DTO objects per command and query; this way you isolate them from the original entities.
public class OrdersVm
{
    public int Id { get; set; }
    public string UserName { get; set; }
    public decimal TotalPrice { get; set; }
    // BillingAddress
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string EmailAddress { get; set; }
    public string AddressLine { get; set; }
    public string Country { get; set; }
    public string State { get; set; }
    public string ZipCode { get; set; }
    // Payment
    public string CardName { get; set; }
    public string CardNumber { get; set; }
    public string Expiration { get; set; }
    public string CVV { get; set; }
    public int PaymentMethod { get; set; }
}
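The _mapper.Map<List<OrdersVm>>(orderList) call in the handler relies on an AutoMapper profile that maps the Order entity to OrdersVm. That profile is not listed in this article; a minimal sketch (the profile class name is assumed) is:

public class MappingProfile : Profile
{
    public MappingProfile()
    {
        // entity <-> view model mapping used by GetOrdersListQueryHandler;
        // the command classes shown later can be mapped the same way
        CreateMap<Order, OrdersVm>().ReverseMap();
    }
}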
Developing Ordering.Application Layer — Application
Command Features — CheckoutOrder
We are going to Develop Application Command Features of
Ordering.Application Layer with CQRS Pattern Implementation in Clean
Architecture.
Go to “Orders” -> “Commands” folder
Create “CheckoutOrder” folder
Develop
CheckoutOrderCommand.cs
CheckoutOrderCommandHandler.cs
—
CheckoutOrderCommand.cs
public class CheckoutOrderCommand : IRequest<int>
{
    public string UserName { get; set; }
    public decimal TotalPrice { get; set; }
    // BillingAddress
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string EmailAddress { get; set; }
    public string AddressLine { get; set; }
    public string Country { get; set; }
    public string State { get; set; }
    public string ZipCode { get; set; }
    // Payment
    public string CardName { get; set; }
    public string CardNumber { get; set; }
    public string Expiration { get; set; }
    public string CVV { get; set; }
    public int PaymentMethod { get; set; }
}
Now, according to MediatR, we should create a handler class for this request and implement the business logic. Our business logic is to create the order and send a mail.
CheckoutOrderCommandHandler.cs
public class CheckoutOrderCommandHandler : IRequestHandler<CheckoutOrderCommand, int>
{
    private readonly IOrderRepository _orderRepository;
    private readonly IMapper _mapper;
    private readonly IEmailService _emailService;
    private readonly ILogger<CheckoutOrderCommandHandler> _logger;

    public CheckoutOrderCommandHandler(IOrderRepository orderRepository, IMapper mapper, IEmailService emailService, ILogger<CheckoutOrderCommandHandler> logger)
    {
        _orderRepository = orderRepository ?? throw new ArgumentNullException(nameof(orderRepository));
        _mapper = mapper ?? throw new ArgumentNullException(nameof(mapper));
        _emailService = emailService ?? throw new ArgumentNullException(nameof(emailService));
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
    }

    public async Task<int> Handle(CheckoutOrderCommand request, CancellationToken cancellationToken)
    {
        var orderEntity = _mapper.Map<Order>(request);
        var newOrder = await _orderRepository.AddAsync(orderEntity);

        _logger.LogInformation($"Order {newOrder.Id} is successfully created.");

        await SendMail(newOrder);

        return newOrder.Id;
    }

    private async Task SendMail(Order order)
    {
        var email = new Email() { To = "ezozkme@gmail.com", Body = $"Order was created.", Subject = "Order was created" };

        try
        {
            await _emailService.SendEmail(email);
        }
        catch (Exception ex)
        {
            _logger.LogError($"Order {order.Id} failed due to an error with the mail service: {ex.Message}");
        }
    }
}
Lastly, I am going to add Validator for checkout order command.
CheckoutOrderCommandValidator.cs
public class CheckoutOrderCommandValidator : AbstractValidator<CheckoutOrderCommand>
{
    public CheckoutOrderCommandValidator()
    {
        RuleFor(p => p.UserName)
            .NotEmpty().WithMessage("{UserName} is required.")
            .NotNull()
            .MaximumLength(50).WithMessage("{UserName} must not exceed 50 characters.");

        RuleFor(p => p.EmailAddress)
            .NotEmpty().WithMessage("{EmailAddress} is required.");

        RuleFor(p => p.TotalPrice)
            .NotEmpty().WithMessage("{TotalPrice} is required.")
            .GreaterThan(0).WithMessage("{TotalPrice} should be greater than zero.");
    }
}
This validator runs when a request reaches MediatR: before the handler class executes, these validations are applied.
For the validation library we are going to use FluentValidation, which is why we inherit from the AbstractValidator class.
We use the RuleFor method, which comes from FluentValidation and lets us define a set of validations. We mostly use empty checks, and we can provide messages for when a validation does not pass.
Developing Ordering.Application Layer — Application
Command Features — UpdateOrder
We are going to Develop Update Application Command Features of
Ordering.Application Layer with CQRS Pattern Implementation in Clean
Architecture.
Create “UpdateOrder” folder
Develop
UpdateOrderCommand
UpdateOrderCommandHandler.cs
— UpdateOrderCommand.cs
public class UpdateOrderCommand : IRequest
{
    public int Id { get; set; }
    public string UserName { get; set; }
    public decimal TotalPrice { get; set; }
    // BillingAddress
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string EmailAddress { get; set; }
    public string AddressLine { get; set; }
    public string Country { get; set; }
    public string State { get; set; }
    public string ZipCode { get; set; }
    // Payment
    public string CardName { get; set; }
    public string CardNumber { get; set; }
    public string Expiration { get; set; }
    public string CVV { get; set; }
    public int PaymentMethod { get; set; }
}
Now, according to MediatR, we should create a handler class for this request and implement the business logic. Our business logic is to update the order.
UpdateOrderCommandHandler.cs
public class UpdateOrderCommandHandler : IRequestHandler<UpdateOrderCommand>
{
    private readonly IOrderRepository _orderRepository;
    private readonly IMapper _mapper;
    private readonly ILogger<UpdateOrderCommandHandler> _logger;

    public UpdateOrderCommandHandler(IOrderRepository orderRepository, IMapper mapper, ILogger<UpdateOrderCommandHandler> logger)
    {
        _orderRepository = orderRepository ?? throw new ArgumentNullException(nameof(orderRepository));
        _mapper = mapper ?? throw new ArgumentNullException(nameof(mapper));
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
    }

    public async Task<Unit> Handle(UpdateOrderCommand request, CancellationToken cancellationToken)
    {
        var orderToUpdate = await _orderRepository.GetByIdAsync(request.Id);
        if (orderToUpdate == null)
        {
            throw new NotFoundException(nameof(Order), request.Id);
        }

        _mapper.Map(request, orderToUpdate, typeof(UpdateOrderCommand), typeof(Order));

        await _orderRepository.UpdateAsync(orderToUpdate);

        _logger.LogInformation($"Order {orderToUpdate.Id} is successfully updated.");

        return Unit.Value;
    }
}
In the UpdateOrderCommandHandler class, we update the order record through the repository object.
Lastly, I am going to add Validator for update order command.
UpdateOrderCommandValidator.cs
public class UpdateOrderCommandValidator : AbstractValidator<UpdateOrderCommand>
{
    public UpdateOrderCommandValidator()
    {
        RuleFor(p => p.UserName)
            .NotEmpty().WithMessage("{UserName} is required.")
            .NotNull()
            .MaximumLength(50).WithMessage("{UserName} must not exceed 50 characters.");

        RuleFor(p => p.EmailAddress)
            .NotEmpty().WithMessage("{EmailAddress} is required.");

        RuleFor(p => p.TotalPrice)
            .NotEmpty().WithMessage("{TotalPrice} is required.")
            .GreaterThan(0).WithMessage("{TotalPrice} should be greater than zero.");
    }
}
This validator runs when a request reaches MediatR: before the handler class executes, these validations are applied.
For the validation library we are going to use FluentValidation, which is why we inherit from the AbstractValidator class.
As you can see, we have developed the CheckoutOrderCommand and UpdateOrderCommand application features of the Ordering.Application layer in Clean Architecture. In the same way, you can develop the DeleteOrderCommand objects; a sketch is shown below.
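As a hedged sketch only (the repository's actual classes may differ slightly), a DeleteOrderCommand following the same pattern as UpdateOrderCommand could look like this:

public class DeleteOrderCommand : IRequest
{
    public int Id { get; set; }
}

public class DeleteOrderCommandHandler : IRequestHandler<DeleteOrderCommand>
{
    private readonly IOrderRepository _orderRepository;
    private readonly ILogger<DeleteOrderCommandHandler> _logger;

    public DeleteOrderCommandHandler(IOrderRepository orderRepository, ILogger<DeleteOrderCommandHandler> logger)
    {
        _orderRepository = orderRepository ?? throw new ArgumentNullException(nameof(orderRepository));
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
    }

    public async Task<Unit> Handle(DeleteOrderCommand request, CancellationToken cancellationToken)
    {
        // load the entity, fail with the same NotFoundException used by UpdateOrder, then delete it
        var orderToDelete = await _orderRepository.GetByIdAsync(request.Id);
        if (orderToDelete == null)
        {
            throw new NotFoundException(nameof(Order), request.Id);
        }

        await _orderRepository.DeleteAsync(orderToDelete);
        _logger.LogInformation($"Order {orderToDelete.Id} is successfully deleted.");

        return Unit.Value;
    }
}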
Developing Ordering.Application Layer — Application
Behaviours
We are going to develop the Application Behaviours of the Ordering.Application layer with the CQRS pattern implementation in Clean Architecture.
As you remember, we had 3 main folders in the Application layer:
+ Application Contracts
+ Application Features
+ Application Behaviours
Now we focus on Application Behaviours.
We can add application behaviours by using MediatR.
MediatR provides the IPipelineBehavior interface, with which we can intercept requests and perform any behavior before the handler classes execute.
Create the ValidationBehaviour class
ValidationBehaviour.cs
using ValidationException = Ordering.Application.Exceptions.ValidationException;

public class ValidationBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    public ValidationBehaviour(IEnumerable<IValidator<TRequest>> validators)
    {
        _validators = validators;
    }

    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
    {
        if (_validators.Any())
        {
            var context = new ValidationContext<TRequest>(request);
            var validationResults = await Task.WhenAll(_validators.Select(v => v.ValidateAsync(context, cancellationToken)));
            var failures = validationResults.SelectMany(r => r.Errors).Where(f => f != null).ToList();

            if (failures.Count != 0)
                throw new ValidationException(failures);
        }
        return await next();
    }
}
This class basically runs all registered validator classes, if any exist, and collects the results. If there is any failed validation, it throws our custom ValidationException.
It discovers the FluentValidation AbstractValidator objects through assembly scanning and runs all of them before the request is executed.
This is a very useful feature for building validation middleware by intercepting the MediatR CQRS requests.
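The service registration shown in the next part also wires up an UnhandledExceptionBehaviour next to the ValidationBehaviour. That class is not listed in this article, but a minimal sketch following the same pipeline signature (names assumed) is:

public class UnhandledExceptionBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly ILogger<TRequest> _logger;

    public UnhandledExceptionBehaviour(ILogger<TRequest> logger)
    {
        _logger = logger;
    }

    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
    {
        try
        {
            // let the request continue through the pipeline
            return await next();
        }
        catch (Exception ex)
        {
            // log the unhandled exception together with the request that caused it, then rethrow
            var requestName = typeof(TRequest).Name;
            _logger.LogError(ex, "Application Request: Unhandled Exception for Request {Name} {@Request}", requestName, request);
            throw;
        }
    }
}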
Developing Ordering.Application Layer — Application Service
Registrations
We are going to develop the Application Service Registrations of the Ordering.Application layer with the CQRS pattern implementation in Clean Architecture.
As you know, in ASP.NET applications we register our objects in the ASP.NET built-in dependency injection container.
This registration is handled in the Startup class, in the ConfigureServices method:
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationServices();
    ...
}
This method receives the IServiceCollection and registers services using the Add methods.
In order to use our own extension method, we should develop it in the Application layer; it is good practice for every layer to handle its own dependencies.
Create a new class
ApplicationServiceRegistration.cs
public static class ApplicationServiceRegistration
{
    public static IServiceCollection AddApplicationServices(this IServiceCollection services)
    {
        services.AddAutoMapper(Assembly.GetExecutingAssembly());
        services.AddValidatorsFromAssembly(Assembly.GetExecutingAssembly());
        services.AddMediatR(Assembly.GetExecutingAssembly());
        services.AddTransient(typeof(IPipelineBehavior<,>), typeof(UnhandledExceptionBehaviour<,>));
        services.AddTransient(typeof(IPipelineBehavior<,>), typeof(ValidationBehaviour<,>));

        return services;
    }
}
In order to use these extension methods, we should install the packages below.
Install Packages:
• Install-Package AutoMapper.Extensions.Microsoft.DependencyInjection
• Install-Package FluentValidation.DependencyInjectionExtensions
Developing Ordering.API Presentation Layer in Clean
Architecture
We are going to start developing the Ordering.API presentation layer in Clean Architecture.
As you know, we have developed the Ordering Core and Application layers. With the Application layer we covered all business use cases without any infrastructure dependencies.
Now I am pushing one step further: we can also develop the presentation API controller classes without any external infrastructure related objects.
It is one of the best practices to implement your APIs without any external infrastructure objects; separating infrastructure from your actual business logic is very important. So let me develop our API controller classes directly, without any external dependencies.
Verify dependencies :
Add Project References:
Ordering.Application
Ordering.Infrastructure
For now we only use the Ordering.Application reference.
After that we can create a new controller for exposing our APIs.
Ordering.API — Controllers
Add OrderController.cs
[ApiController]
[Route("api/v1/[controller]")]
public class OrderController : ControllerBase
{
    private readonly IMediator _mediator;

    public OrderController(IMediator mediator)
    {
        _mediator = mediator ?? throw new ArgumentNullException(nameof(mediator));
    }

    [HttpGet("{userName}", Name = "GetOrder")]
    [ProducesResponseType(typeof(IEnumerable<OrdersVm>), (int)HttpStatusCode.OK)]
    public async Task<ActionResult<IEnumerable<OrdersVm>>> GetOrdersByUserName(string userName)
    {
        var query = new GetOrdersListQuery(userName);
        var orders = await _mediator.Send(query);
        return Ok(orders);
    }

    // testing purpose
    [HttpPost(Name = "CheckoutOrder")]
    [ProducesResponseType((int)HttpStatusCode.OK)]
    public async Task<ActionResult<int>> CheckoutOrder([FromBody] CheckoutOrderCommand command)
    {
        var result = await _mediator.Send(command);
        return Ok(result);
    }

    [HttpPut(Name = "UpdateOrder")]
    [ProducesResponseType(StatusCodes.Status204NoContent)]
    [ProducesResponseType(StatusCodes.Status404NotFound)]
    [ProducesDefaultResponseType]
    public async Task<ActionResult> UpdateOrder([FromBody] UpdateOrderCommand command)
    {
        await _mediator.Send(command);
        return NoContent();
    }

    [HttpDelete("{id}", Name = "DeleteOrder")]
    [ProducesResponseType(StatusCodes.Status204NoContent)]
    [ProducesResponseType(StatusCodes.Status404NotFound)]
    [ProducesDefaultResponseType]
    public async Task<ActionResult> DeleteOrder(int id)
    {
        var command = new DeleteOrderCommand() { Id = id };
        await _mediator.Send(command);
        return NoContent();
    }
}
As you can see, there is very little code for the main logic. We only create a MediatR CQRS request object and send it with MediatR. Behind the scenes, MediatR builds a pipeline for the request and triggers the corresponding handler method.
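To make that flow concrete, a handler for the GetOrdersListQuery used above lives in the Application layer and might look roughly like the sketch below. The IOrderRepository abstraction, the UserName property and the mapping details are assumptions for illustration, not a verbatim copy of the repository code.

public class GetOrdersListQueryHandler : IRequestHandler<GetOrdersListQuery, List<OrdersVm>>
{
    private readonly IOrderRepository _orderRepository; // assumed Application-layer abstraction
    private readonly IMapper _mapper;

    public GetOrdersListQueryHandler(IOrderRepository orderRepository, IMapper mapper)
    {
        _orderRepository = orderRepository;
        _mapper = mapper;
    }

    public async Task<List<OrdersVm>> Handle(GetOrdersListQuery request, CancellationToken cancellationToken)
    {
        // No infrastructure types leak in here; the data access details live behind the abstraction.
        var orderList = await _orderRepository.GetOrdersByUserName(request.UserName);
        return _mapper.Map<List<OrdersVm>>(orderList);
    }
}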
No infrastructure dependencies, no application business logic. The presentation layer is only responsible for exposing APIs; business rules are handled in the Application layer.
But wait, where are the implementations? How does MediatR get, insert, update and delete orders?
If you run the application, you will get an error because these objects are not registered in dependency injection. We should register the interfaces with their implementations.
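A hedged sketch of that registration in the Infrastructure layer could look like the following; the OrderContext, repository type names and the connection string key are assumptions used only to illustrate the idea:

public static class InfrastructureServiceRegistration
{
    public static IServiceCollection AddInfrastructureServices(this IServiceCollection services, IConfiguration configuration)
    {
        // Register the EF Core DbContext used for persistence (assumed name and connection string key).
        services.AddDbContext<OrderContext>(options =>
            options.UseSqlServer(configuration.GetConnectionString("OrderingConnectionString")));

        // Map the Application-layer abstractions to their concrete implementations.
        services.AddScoped(typeof(IAsyncRepository<>), typeof(RepositoryBase<>));
        services.AddScoped<IOrderRepository, OrderRepository>();

        return services;
    }
}

Ordering.API can then call services.AddInfrastructureServices(Configuration) next to AddApplicationServices, so each layer keeps registering its own dependencies.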
You can continue the rest of the development by following the resources below.
Get Udemy Course with discounted — Microservices Architecture and
Implementation on .NET.
Get the Source Code from AspnetRun Microservices Github
Follow Series of Microservices Articles
This article is part of a series of articles. You can follow the series with the links below.
• 0- Microservices Architecture on .NET with applying CQRS, Clean
Architecture and Event-Driven Communication
• 1- Microservice Using ASP.NET Core, MongoDB and Docker Container
• 2- Using Redis with ASP.NET Core, and Docker Container for Basket
Microservices
• 3- Using PostgreSQL and Dapper with ASP.NET and Docker Container for
Discount Microservices
• 4- Building Ocelot API Gateway Microservices with ASP.NET Core and
Docker Container
• 5- Microservices Event Driven Architecture with RabbitMQ and Docker
Container on .NET
• 6- CQRS and Event Sourcing in Event Driven Architecture of Ordering
Microservices
• 7- Microservices Cross-Cutting Concerns with Distributed Logging and
Microservices Resilience
• 8- Securing Microservices with IdentityServer4 with OAuth2 and
OpenID Connect fronted by Ocelot API Gateway
• 9- Using gRPC in Microservices for Building a high-performance
Interservice Communication with .Net 5
• 10-Deploying .Net Microservices to Azure Kubernetes Services(AKS) and
Automating with Azure DevOps
Microservices Observability,
Resilience, Monitoring on .Net
Mehmet Ozkaya · Follow
Published in aspnetrun · 6 min read · Apr 20, 2021
Building Cross-Cutting Concerns — Microservices Observability with Distributed Logging, Health Monitoring, Resilience and Fault Tolerance using Polly.
In this article we will show how to apply Microservices Observability, Microservices Resilience and Monitoring principles to .NET microservices.
When you are developing projects in a microservices architecture, it is crucial to follow Microservices Observability, Microservices Resilience and Monitoring principles.
So, we will separate our Microservices Cross-Cutting Concerns into 4 main pillars;
• Microservices Observability with Distributed Logging
• Microservices Resilience and Fault Tolerance with applying Retry and
Circuit-Breaker patterns using Polly
• Microservices Monitoring with Health Checks using WatchDog
• Microservices Tracing with OpenTelemetry using Zipkin
So we are going to follow these 4 main pillars and develop our microservices reference application using the latest implementations and best practices of cloud-native microservices architecture.
See the big picture of the Microservices Architecture and Step by Step Implementation we are going to develop together.
We have already developed this microservices reference application in the previous microservices articles.
Step by Step Development w/ Udemy Course
Get Udemy Course with discounted — Microservices Observability,
Resilience, Monitoring on .Net.
Source Code
Get the Source Code from AspnetRun Microservices Github — Clone or fork this repository, and if you like it, don't forget to star it. If you find an issue or have a question, you can open an issue directly on the repository.
Microservices Cross-Cutting Concerns
So with this article, we will extend this microservices reference application with Cross-Cutting Concerns to provide microservices resilience.
We are going to cover Cross-Cutting Concerns in 4 main parts;
1-Microservices Observability with Distributed Logging
This applies the Elastic Stack, which includes Elasticsearch + Logstash + Kibana, and the Serilog nuget package for .NET microservices.
We will add the Kibana image from Docker Hub to docker-compose and feed Kibana from the Elastic Stack.
2-Microservices Resilience and Fault Tolerance using Polly
This will apply retry and circuit-breaker patterns on microservices
communication with creating polly policies.
3-Microservices Health Monitoring using WatchDog
This will be the ASP.NET health check implementation with custom health check methods which include database availability checks — for example in the Basket microservice, we will add sub health check conditions for connecting to Redis and RabbitMQ.
4-Microservices Distributed Tracing with OpenTelemetry using Zipkin
This will be the implementation of OpenTelemetry with Zipkin.
By the end of this article, you'll learn how to design and develop Microservices Cross-Cutting Concerns — Microservices Observability with Distributed Logging, Health Monitoring, and Resilience and Fault Tolerance using Polly on .NET Microservices.
Background
This article is part of a series of articles. You can follow the series with the links below.
• 0- Microservices Observability, Resilience, Monitoring on .Net
• 1- Microservices Observability with Distributed Logging using
ElasticSearch and Kibana
• 2- Microservices Resilience and Fault Tolerance with applying Retry and
Circuit-Breaker patterns using Polly
• 3- Microservices Monitoring with Health Checks using WatchDog
Also, you can find the whole microservices series in the article below;
Check the previous article, which explains the overall microservice architecture of this repository.
We will focus on microservices cross-cutting concerns in this article series.
Prerequisites
• Install the .NET 5 SDK or above
• Install Visual Studio 2019 v16.x or above
• Docker Desktop
Microservices = Distributed Systems
Microservice architecture has become the new model for building modern cloud-native applications. And microservices-based applications are distributed systems.
https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/handle-partial-failure
While architecting distributed microservices-based applications, we get a lot of benefits, such as easier scaling and management of services. But at the same time, the increasing interactions between those services create a new set of problems.
Distributed Dependencies
So we should assume that failures will happen, and we should deal with unexpected failures, especially in a system with multiple distributed dependencies.
https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/handle-partial-failure
For example, in case of network or container failures, microservices must have a strategy to retry requests.
Dealing with Failures
But what happens when the machine where the microservice is running fails?
https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/handle-partial-failure
For instance, a single microservice can fail or might not be available to
respond for a short time. Since clients and services are separate processes, a
service might not be able to respond in a timely way to a client’s request.
The service might be overloaded and responding very slowly to requests or
might simply not be accessible for a short time because of network issues.
How do you handle the complexity that comes with
distributed cloud-native microservices?
Microservices Resilience
Microservices should be designed for resiliency. A microservice needs to be resilient to failures and must accept partial failures. We should design microservices to be resilient to these partial failures. Microservices should have the ability to recover from failures and continue to function.
https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/application-resiliency-patterns
We should accept failures and respond to them without any downtime or data loss. The main purpose of resilient microservices is to return the application to a fully functioning state after a failure.
So when we are architecting distributed cloud applications, we should assume that failures will happen and design our microservices for resiliency. We accept that microservices are going to fail at some point; that's why we need to learn to embrace failures.
Microservices Observability
Also, we should have a strategy for monitoring and managing the complex dependencies of our microservices. That means we need to implement microservices observability using distributed logging features.
https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/observability-patterns
Microservices Observability gives us greater operational insight and leads to a better understanding of incidents in our microservices architecture.
Microservices Monitoring
Also, we should monitor our microservices with health monitoring. Health monitoring is critical to multiple aspects of operating microservices.
This way, we can understand whether a particular microservice is alive and ready to accommodate requests. We can also provide health information to our orchestrator's cluster, so that the cluster can act accordingly. For example, Kubernetes has Liveness and Readiness probes that we can point at health check urls.
https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/observability-patterns
That makes for good health reporting, customized for our microservices, for example by adding sub health checks for the underlying database connection; this way we can detect and fix issues in our running application much more easily.
So, we will separate our Microservices Cross-Cutting Concerns into 4 main pillars;
• Microservices Observability with Distributed Logging
• Microservices Resilience and Fault Tolerance with applying Retry and
Circuit-Breaker patterns using Polly
• Microservices Monitoring with Health Checks using WatchDog
• Microservices Tracing with OpenTelemetry using Zipkin
We will apply these cross-cutting concerns step by step in the next articles ->
• 1- Microservices Observability with Distributed Logging using
ElasticSearch and Kibana
Step by Step Development w/ Udemy Course
Get Udemy Course with discounted — Microservices Observability, Resilience, Monitoring on .Net.
Microservices Observability with
Distributed Logging using
ElasticSearch and Kibana
Mehmet Ozkaya · Follow
Published in aspnetrun · 11 min read · Apr 20, 2021
In this article, we are going to develop "Microservices Observability with Distributed Logging using ElasticSearch and Kibana".
https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/observability-patterns
Microservice architecture has become the new model for building modern cloud-native applications. And microservices-based applications are distributed systems.
How do you handle the complexity that comes with cloud and
microservices? — Observability
Microservices should have a strategy for monitoring and managing the complex dependencies between them. That means we need to implement microservices observability using distributed logging features.
Microservices Observability gives us greater operational insight and leads to a better understanding of incidents in our microservices architecture.
Let’s check our big picture and see what we are going to build one by one.
As you can see, we are here, starting to develop "Microservices Observability with Distributed Logging".
We are going to cover the;
• Start with ASP.NET logging basics and understand how logging works in ASP.NET
• Apply the Elastic Stack, which includes Elasticsearch + Logstash + Kibana
• Use the Serilog nuget package for creating structured logs in .NET microservices
• Use the Kibana image from Docker Hub in docker-compose and feed Kibana from the Elastic Stack
• That means we containerize ElasticSearch and Kibana in our Docker environment using Docker Compose
So in this article, we are going to develop our "Microservices Observability with Distributed Logging, implementing the Elastic Stack on ASP.NET Microservices". By the end of the article, we will have ElasticSearch and Kibana docker images and feed these systems from our ASP.NET microservices, creating structured logs with the Serilog nuget package.
Background
This article is part of a series of articles. You can follow the series with the links below.
• 0- Microservices Observability, Resilience, Monitoring on .Net
• 1- Microservices Observability with Distributed Logging using
ElasticSearch and Kibana
• 2- Microservices Resilience and Fault Tolerance with applying Retry and
Circuit-Breaker patterns using Polly
• 3- Microservices Monitoring with Health Checks using WatchDog
We will focus on microservices cross-cutting concerns in this article series.
Step by Step Development w/ Udemy Course
Get Udemy Course with discounted — Microservices Observability,
Resilience, Monitoring on .Net.
Source Code
Get the Source Code from AspnetRun Microservices Github — Clone or fork this repository, and if you like it, don't forget to star it. If you find an issue or have a question, you can open an issue directly on the repository.
What is Elastic Search ?
We have said that we should have a strategy for monitoring and managing the complex dependencies of our microservices. That means we need to implement microservices observability using distributed logging features.
Elasticsearch is one of the best tools for distributed logging in microservices architectures.
Official — ElasticSearch
Elasticsearch is a distributed, open source search and analytics engine for all
types of data, including textual, numerical, geospatial, structured, and
unstructured. Elasticsearch is built on Apache Lucene and was first released
in 2010 by Elasticsearch N.V. (now known as Elastic)
So we can say that Elasticsearch is the preferred full-text search engine for processes such as content search, data analysis, queries and suggestions, especially due to its performance and its powerful, flexible features. It is developed in Java and is based on Lucene.
In other words, Elasticsearch is an open source database that is well suited
for indexing logs and analytical data.
Why is ElasticSearch so popular?
Besides being a requirement for almost every application, ElasticSearch
solves a number of problems and does it really well:
• Free and open source
Basic features are mostly free. If you need security and alerting features with Kibana, then you need to purchase the commercial pack.
• RESTful API
ElasticSearch has a RESTful API. Query results are returned in JSON
format, which means working with results is easy. Querying and adding
data through the RESTful API means it is easy to use any programming
language to work with ElasticSearch.
• Easy To Query
ElasticSearch has a built-in full-text search engine based on Apache Lucene. It is very easy to write queries on Elasticsearch; no advanced programming knowledge is required.
• It’s very fast
Elasticsearch is fast. Because the Elasticsearch engine is based on Lucene, it excels at full-text search.
Elasticsearch is also a near real-time search platform; thanks to its high speed and high availability, it is also a popular tool for storing big data today.
• Scalable
It is easy to scale. It's open source and containerized on Docker Hub, which means it's easy to run in Docker containers.
• Easy to Install
There are docker images on Docker Hub for the ElasticSearch and Kibana containers. That means we can easily add these services to our docker-compose files and be ready to start logging and searching.
What is Kibana ?
Kibana is an open source data visualization and user interface for
Elasticsearch. We can think of Elasticsearch as a database and Kibana as the
web user interface that we can use to create graphs, queries, indexes in
Elasticsearch.
It provides search and visualization capabilities for data indexes in the
ElasticSearch. You can query the data on Elasticsearch and create graphics.
Also you can see the image,
• Logstash collects and transforms logs
• Elasticsearch searches and analyses logs
• Kibana visualizes and manages logs
Why logging with ElasticSearch and Kibana?
In microservices applications, logging is critical to implement in order to identify problems in a distributed architecture. ElasticSearch makes any kind of logging easy, accessible and searchable.
When using ElasticSearch, logging becomes easily accessible and searchable through ElasticSearch's incredible speed and simple query language, coupled with the Kibana interface.
What is Serilog?
Serilog is a logging library for ASP.NET that makes logging easy. There are several sinks available for Serilog, for example the File, SQL and Elasticsearch sinks.
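As a minimal illustration of the sink idea, assuming the Serilog.Sinks.Console and Serilog.Sinks.File packages are installed, a logger writing to two sinks can be created like this:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .Enrich.FromLogContext()
    .WriteTo.Console()                                                     // sink 1: console
    .WriteTo.File("logs/app-.txt", rollingInterval: RollingInterval.Day)   // sink 2: rolling file
    .CreateLogger();

// Structured logging: the named placeholders become searchable fields, not a concatenated string.
Log.Information("Order {OrderId} created for {UserName}", 42, "swn");

In the rest of this article we will use the Elasticsearch sink instead of the file sink, so the same structured events become searchable in Kibana.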
Adding ElasticSearch and Kibana image into Docker-Compose
File for Multi-Container Docker Environment
We are going to add the ElasticSearch and Kibana images into the docker-compose file for our multi-container Docker environment.
Implementing Centralized Distributed Logging for Microservices
Elastic Stack (ELK) E-elasticsearch, L-logstash (log shipping), K-kibana
SeriLog -send logs to ElasticSearch
Kibana — query logs — data visualization
Enrich Logs
Before we start coding in .NET Core, it's important to first spin up the Elasticsearch and Kibana containers.
The easiest way to spin up these containers is to update our docker-compose.yml file.
Before we start, we should check ElasticSearch and Kibana images from
docker hub.
Docker compose setup ElasticSearch and Kibana
https://hub.docker.com/_/elasticsearch/
https://hub.docker.com/_/kibana
Now we can add the ElasticSearch and Kibana images into our docker-compose.yml files
docker-compose.yml
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
kibana:
image: docker.elastic.co/kibana/kibana:7.9.2
volumes:
mongo_data:
portainer_data:
postgres_data:
pgadmin_data:
elasticsearch-data:   # ADDED
We have added 2 images and 1 volume.
docker-compose.override.yml
elasticsearch:
  container_name: elasticsearch
  environment:
    - xpack.monitoring.enabled=true
    - xpack.watcher.enabled=false
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    - discovery.type=single-node
  ports:
    - "9200:9200"
  volumes:
    - elasticsearch-data:/usr/share/elasticsearch/data

kibana:
  container_name: kibana
  environment:
    - ELASTICSEARCH_URL=http://localhost:9200
  depends_on:
    - elasticsearch
  ports:
    - "5601:5601"
We have added the elasticsearch and kibana services.
We configure elasticsearch and kibana through environment variables.
Finally, we can create ElasticSearch and Kibana image for Implementing
Centralized Distributed Logging for Microservices.
Open a terminal in that location and RUN the commands below;
docker-compose -f docker-compose.yml -f docker-compose.override.yml
up -d
docker-compose -f docker-compose.yml -f docker-compose.override.yml
down
This will pull images for elasticsearch and kibana.
The first time you run the docker-compose command, it will download the
images for ElasticSearch and Kibana from the docker registry,
so it might take a few minutes depending on your connection speed.
Once you’ve run the docker-compose up command, check that ElasticSearch
and Kibana are up and running.
RUN on DOCKER
Verify that Elasticsearch is up and running
Elasticsearch is up and running
http://localhost:9200
Check the indexes
http://localhost:9200/_aliases
So right now there are no indexes defined in Elasticsearch. I am not going to create indexes or data manually, because we have already generated logs, and the logs coming from the different microservices will become the indexes and data for Elasticsearch.
But it is good to know that you can create data by sending a POST request to Elasticsearch. For example, you can send a set of JSON product data to this url and Elasticsearch will store the data.
http://localhost:9200/products/_bulk
http://localhost:9200/products/_search
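Purely as an illustration, and assuming a throwaway products index with a made-up document shape, such a bulk request could be sent from C# roughly like this (inside an async method):

// requires: using System.Net.Http; using System.Text;
using var client = new HttpClient();

// The _bulk API expects newline-delimited JSON: an action line, then a source line,
// and the whole payload has to end with a newline.
var ndjson =
    "{ \"index\" : { \"_index\" : \"products\" } }\n" +
    "{ \"name\" : \"IPhone X\", \"price\" : 950 }\n";

var content = new StringContent(ndjson, Encoding.UTF8, "application/x-ndjson");
var response = await client.PostAsync("http://localhost:9200/products/_bulk", content);
Console.WriteLine(await response.Content.ReadAsStringAsync());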
But these are not our topic for now; we are focusing on the distributed logging features of our microservices.
Verify that Kibana is up and running
Navigate to http://localhost:5601 to ensure Kibana is up and running
Kibana is up and running
http://localhost:5601
Normally it takes some time for Kibana to start, so we should wait. But if you get an error, you should INCREASE THE DOCKER RAM.
DOCKER SETTINGS
RESOURCES
MEMORY
INCREASE 2 GB -> 4 GB
As you can see that, we have successfully added ElasticSearch and Kibana
image into Docker-Compose File for Multi-Container Docker Environment.
Install and Configure SeriLog For ElasticSearch and Kibana Sink
Integration
We are going to Install and Configure SeriLog For ElasticSearch and Kibana
Sink Integration.
Let’s develop Serilog in AspnetRunBasics Shopping Web Application and test
it.
Go to
AspnetRunBasics
Install Packages
Install-Package Serilog.AspNetCore
Install-Package Serilog.Enrichers.Environment
Install-Package Serilog.Sinks.Elasticsearch
Check item group project file;
<ItemGroup>
  <PackageReference Include="Microsoft.VisualStudio.Azure.Containers.Tools.Targets" Version="1.10.9" />
  <PackageReference Include="Serilog.AspNetCore" Version="3.4.0" />
  <PackageReference Include="Serilog.Enrichers.Environment" Version="2.1.3" />
  <PackageReference Include="Serilog.Sinks.Elasticsearch" Version="8.4.1" />
</ItemGroup>
After the related packages are installed, we organize the logging section in
our appsettings.json file according to the serilog section that we will use:
Change the Logging Settings on appsettings.json file
Existing
{
  "ApiSettings": {
    "GatewayAddress": "http://localhost:8010"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information",
      "AspnetRunBasics": "Debug"
    }
  },
  "AllowedHosts": "*"
}
New One :
{
  "ApiSettings": {
    "GatewayAddress": "http://localhost:8010"
  },
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Information",
        "System": "Warning"
      }
    }
  },
  "ElasticConfiguration": {
    "Uri": "http://localhost:9200"
  },
  "AllowedHosts": "*"
}
We have added the Serilog section and the section where our Elasticsearch
endpoint is located.
Remove the Logging section in appsettings.json and replace it with the
following configuration so that we can tell Serilog what the minimum log
level should be, and what url to use for logging to Elasticsearch.
Configure SeriLog
After that, we will configure logging in Program.cs by adding the following details.
What we want to do is set up logging before creating the host. This way, if the host does not start, we can still log any errors. After that we will add the UseSerilog configuration with its ElasticsearchSinkOptions to the Program.cs file.
AspnetRunBasics
Program.cs — UseSerilog — — ADDED
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .UseSerilog((context, configuration) =>
        {
            configuration
                .Enrich.FromLogContext()
                .Enrich.WithMachineName()
                .WriteTo.Console()
                .WriteTo.Elasticsearch(
                    new ElasticsearchSinkOptions(new Uri(context.Configuration["ElasticConfiguration:Uri"]))
                    {
                        IndexFormat = $"applogs-{Assembly.GetExecutingAssembly().GetName().Name.ToLower().Replace(".", "-")}-{context.HostingEnvironment.EnvironmentName?.ToLower().Replace(".", "-")}-logs-{DateTime.UtcNow:yyyy-MM}",
                        AutoRegisterTemplate = true,
                        NumberOfShards = 2,
                        NumberOfReplicas = 1
                    })
                .Enrich.WithProperty("Environment", context.HostingEnvironment.EnvironmentName)
                .ReadFrom.Configuration(context.Configuration);
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
We have configured Serilog to write to the Console and to Elasticsearch. When configuring the Elasticsearch sink, we provide some sink options.
As you can see, we have installed and configured Serilog for the ElasticSearch and Kibana sink integration; next we are going to test it.
Test SeriLog For ElasticSearch and Kibana Sink Integration in
Shopping Web Microservices
We are going to test Serilog with the ElasticSearch and Kibana sink integration in the Shopping Web application.
So far we have finished the development part of the configuration, but Kibana will not show any logs at the moment. We need to create an index pattern for Kibana to show our logs.
As a standard, I will create a pattern with '*'. We could give a more specific pattern here, but I found it appropriate to use a general pattern as an example.
Let's continue the logging process by showing how to create an example index pattern in Kibana.
Before we start, let me Verify Docker compose worked :
docker-compose -f docker-compose.yml -f docker-compose.override.yml
up -d
Run Application
Set a Startup Project and Run
AspnetRunBasics
See the logs in color:
[15:02:50 INF] Now listening on: http://localhost:5006
[15:02:50 INF] Application started. Press Ctrl+C to shut down.
[15:02:50 INF] Hosting environment: Development
[15:02:50 INF] Content root path: C:
\Users\ezozkme\source\repos\mutank\src\WebApps\AspnetRunBasics
[15:02:51 INF] Request starting HTTP/1.1 GET http://localhost:5006/
— [15:02:52 INF] Executing endpoint ‘/Index’
[15:02:52 INF] Route matched with {page = “/Index”}. Executing page
/Index
[15:02:52 INF] Executing handler method
AspnetRunBasics.Pages.IndexModel.OnGetAsync — ModelState is Valid
[15:02:57 INF] Executed handler method OnGetAsync, returned result
Microsoft.AspNetCore.Mvc.RazorPages.PageResult.
[15:02:57 INF] Executed page /Index in 5374.8521ms
[15:02:57 INF] Executed endpoint ‘/Index’
[15:02:57 INF] Request finished HTTP/1.1 GET http://localhost:5006/
— — — 200 — text/html;+charset=utf-8 5688.4302ms
[15:03:29 INF] Request starting HTTP/1.1 GET http://localhost:5006/
Product — [15:03:29 INF] Executing endpoint ‘/Product’
When we go to the Kibana interface at the localhost:5601 url in the browser, it greets us with a screen as follows:
Elasticsearch is up and running
http://localhost:9200
Kibana is up and running
http://localhost:5601
http://localhost:5601/app/home#/
Create an Index Pattern in Kibana to Show Data
Kibana will not show the logs yet. Before we can view the logged data, we must specify an index pattern.
In order to do this, click the Explore by myself link on the default Kibana
page
and then click the Discover link in the navigation section.
• Use Elasticsearch data
Connect to your Elasticsearch index
• Index patterns
Create Index Pattern
my index appears here
aspnetrunbasics-development-2021-02
Then, type in an index pattern. It will show the index pattern that was just
created. You can type in the entire index, or use wildcards.
put this
aspnetrunbasics-*
On the next page, select the @timestamp field as the time filter field name
and click the Create index pattern button.
• Next Step
timestamp
Create Index Pattern
See Fields
You can now view the logs by clicking the Discover link in the navigation
pane.
• Go to Main Menu
Discover
See Logs, as you can see that we can see logs on Kibana Dashboard.
Change LogInformation — add new customPropery
Now I am going to add new log into my application.
AspnetRunBasics.Services
CatalogService
public async Task<IEnumerable<CatalogModel>> GetCatalog()
{
    _logger.LogInformation("Getting Catalog Products from url : {url} and custom property: {customPropery}", _client.BaseAddress, 6);
Now that we've logged a message, refresh the application at http://localhost:5000 again. Then search for the log message containing the customPropery text.
• Run Again
See
fields.customPropery — 6
• Open Kibana
Index Pattern
Refresh
• Now you can search by
fields.customPropery — 6
Go to Discover
fields.customPropery : 6
Search
As you can see, we have tested Serilog with the ElasticSearch and Kibana sink integration in the Shopping Web application.
But these are the logs of only 1 microservice; we have more than 10 microservices that need centralized logging in order to trace exceptions.
Get Udemy Course with discounted — Microservices Observability,
Resilience, Monitoring on .Net.
For the next articles ->
• 2- Microservices Resilience and Fault Tolerance with applying Retry and
Circuit-Breaker patterns using Polly
Step by Step Development w/ Udemy Course
Microservices Resilience and Fault
Tolerance with applying Retry and
Circuit-Breaker patterns using Polly
Mehmet Ozkaya · Follow
Published in aspnetrun · 13 min read · Apr 20, 2021
In this article, we are going to develop "Microservices Resilience and Fault Tolerance with applying Retry and Circuit-Breaker patterns using Polly".
https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/application-resiliency-patterns
Microservice architecture has become the new model for building modern cloud-native applications. And microservices-based applications are distributed systems.
While architecting distributed microservices-based applications, we get a lot of benefits, such as easier scaling and management of services. But at the same time, the increasing interactions between those services create a new set of problems.
So we should assume that failures will happen, and we should deal with unexpected failures, especially in a distributed system. For example, in case of network or container failures, microservices must have a strategy to retry requests.
https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/handle-partial-failure
But What happens when the machine where the microservice is running
fails?
For instance, a single microservice can fail or might not be available to
respond for a short time. Since clients and services are separate processes, a
service might not be able to respond in a timely way to a client’s request.
The service might be overloaded and responding very slowly to requests or
might simply not be accessible for a short time because of network issues.
How do you handle the complexity that comes with cloud and microservices?
— Microservices Resilience
Microservices should be designed for resiliency. A microservice needs to be resilient to failures and must accept partial failures. We should design microservices to be resilient to these partial failures. They should have the ability to recover from failures and continue to function.
We should accept failures and respond to them without any downtime or data loss. The main purpose of resilient microservices is to return the application to a fully functioning state after a failure.
So when we are architecting distributed cloud applications, we should assume that failures will happen and design our microservices for resiliency. We accept that microservices are going to fail at some point; that's why we need to learn to embrace failures.
Let’s check our big picture and see what we are going to build one by one.
As you can see, we are here, starting to develop "Microservices Resilience and Fault Tolerance with applying Retry and Circuit-Breaker patterns using Polly".
We are going to cover the;
• Start with Microservices Resilience Patterns
• Definition of Retry pattern
• Definition of Circuit-Breaker patterns
• Definition of Bulkhead pattern
• Applying the Retry pattern using Polly
• Applying the Circuit-Breaker pattern using Polly
We will apply these microservices resilience patterns to our microservices communications, specifically in the
• Shopping Client microservice
• Ocelot Api Gateway microservice
• Shopping.Aggregator microservice
These communicate with internal microservices in order to perform their operations, so these communications should be made resilient by applying the retry and circuit-breaker patterns.
We also apply the Retry pattern for database migration operations.
• When the Discount microservice starts up, we create the database, table and seed data in the PostgreSQL database in order to perform the migration. This will be retried by applying the retry pattern with Polly.
• When the Ordering microservice starts up, we create the database, table and seed data in the SQL Server database in order to perform the migration. This will be retried by applying the retry pattern with Polly.
So in this article, we are going to develop our "Microservices Resilience and Fault Tolerance with applying Retry and Circuit-Breaker patterns using Polly".
By the end of the article, we will have made our microservices reference application stronger by applying microservices resilience patterns.
Background
This article is part of a series of articles. You can follow the series with the links below.
• 0- Microservices Observability, Resilience, Monitoring on .Net
• 1- Microservices Observability with Distributed Logging using
ElasticSearch and Kibana
• 2- Microservices Resilience and Fault Tolerance with applying Retry and
Circuit-Breaker patterns using Polly
• 3- Microservices Monitoring with Health Checks using WatchDog
We will focus on microservices cross-cutting concerns in this article series.
Step by Step Development w/ Udemy Course
Get Udemy Course with discounted — Microservices Observability,
Resilience, Monitoring on .Net.
Source Code
Get the Source Code from AspnetRun Microservices Github — Clone or fork this repository, and if you like it, don't forget to star it. If you find an issue or have a question, you can open an issue directly on the repository.
Microservices Resilience Patterns
In order to provide an unbroken microservice experience, the architecture must be designed correctly, and developers must build applications that fit this architecture and do not cause errors.
Unfortunately, we may not always achieve this, but fortunately there are some approaches that we can apply to provide uninterrupted service; the name of these approaches is "Resilience Patterns".
Resilience patterns can be divided into different categories according to the problem area they solve, and it is possible to examine them under separate topics, but in this article we will look at the most needed and essential patterns.
Ensuring the durability of services in a microservice architecture is relatively difficult compared to a monolithic architecture. In the microservice architecture, the communication between services is distributed, and a lot of internal or external network traffic is created for a single transaction. As the communication between services and the dependence on external services increase, the possibility of errors increases as well.
Polly Policies
We will use Polly when implementing microservices resilience patterns.
Polly is a .NET resilience and transient-fault-handling library that allows
developers to express policies such as Retry, Circuit Breaker, Timeout,
Bulkhead Isolation, and Fallback in a fluent and thread-safe manner.
Go to;
https://github.com/App-vNext/Polly
You can check Resilience policies. Polly offers multiple resilience policies:
• Retry
Configures retry operations on designated operations
• Circuit Breaker
Blocks requested operations for a predefined period when faults exceed a
configured threshold.
• Timeout
Places limit on the duration for which a caller can wait for a response.
• Bulkhead
Constrains actions to fixed-size resource pool to prevent failing calls
from swamping a resource.
• Cache
Stores responses automatically.
• Fallback
Defines structured behavior upon a failure.
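Two of these policies, Timeout and Fallback, will not be developed later in this article; a purely illustrative sketch of how they could be composed with Polly (the fallback payload and the 10-second limit are assumptions) looks like this:

// Give up on any single call that takes longer than 10 seconds.
var timeoutPolicy = Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10));

// If the call still fails, return a canned empty response instead of surfacing the exception.
var fallbackPolicy = Policy<HttpResponseMessage>
    .Handle<Exception>()
    .FallbackAsync(new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent("[]") // assumed empty-list payload for the example
    });

// Policies compose; here the fallback wraps the timeout.
var resilientPolicy = fallbackPolicy.WrapAsync(timeoutPolicy);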
Retry Pattern
Microservices inter-service communication can be performed over HTTP or gRPC and can be managed at development time. But when it comes to network, server and similar physical interruptions, failures may be unavoidable.
If one of the services returns HTTP 500 during a transaction, the transaction may be interrupted and an error may be returned to the user, but when the user restarts the same transaction, it may work.
In such a case, it is more logical to repeat the request made to the service that returned 500: the completed steps do not have to be rolled back, and the user is able to perform the transaction successfully, even if a bit late.
So we can say that in a distributed cloud-native environment, microservices communications can fail because of transient failures, but these failures happen over a short time and fix themselves afterwards.
For these cases, we should implement the retry pattern.
https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/application-resiliency-patterns
In the image, the client sends a request to the API gateway in order to call internal microservices. The API Gateway service needs to access an internal microservice and creates a request; if the internal microservice has failed for a short time and returns HTTP 500 or gets an error, the transaction is terminated.
But when the Retry pattern is applied, the specified number of attempts will be made and the transaction may be performed successfully.
In this image, the API gateway gets a 500 response 2 times from the internal microservice and the third call finally succeeds; that means the transaction completed successfully without termination.
In order to allow the microservice time to self-correct, it is important to extend the back-off time before retrying the call. In most cases the back-off period should increase exponentially to allow sufficient correction time.
Circuit Breaker Pattern
The Circuit Breaker pattern is a method modeled on circuit breaker switchgear in electronic circuits, as the name suggests. Circuit breakers stop the load transfer in case of a failure in the system in order to protect the electronic circuit.
When the Circuit Breaker pattern is applied, it is placed around the communication between services. It monitors the communication between the services and follows the errors that occur. An example of such an error is a requested API endpoint returning an HTTP 500 error code.
When the errors in the system exceed a certain threshold value, the circuit breaker opens and cuts off communication, returning previously defined error messages.
While the circuit breaker is open, it continues to monitor the communication traffic, and if the requested service starts to return successful results, it closes again.
If you look at the image, the circuit breaker has 3 basic modes.
https://www.gokhan-gokalp.com/en/resiliency-patterns-in-microservice-architecture/
Closed: In this mode, the circuit breaker is not open and all requests are executed.
Open: The circuit breaker is open and it prevents the application from repeatedly trying to execute an operation while an error occurs.
Half-Open: In this mode, the circuit breaker executes a few operations to identify whether the error still occurs. If errors occur, then the circuit breaker will be opened again; if not, it will be closed.
https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/application-resiliency-patterns
In the image, the client sends a request to the API gateway in order to call internal microservices. And we have implemented the retry pattern on this communication. But what if the internal microservice is completely down and will not be ready for a long time?
In that case, repeated retries against an unresponsive service consume resources such as memory, threads and database connections and cause irrelevant failures.
So for that reason, we apply the Circuit Breaker pattern on top of the retry pattern. As you can see here, after 100 failed requests in 30 seconds, the circuit breaker opens and no longer allows calls to the internal microservice from the API gateway. That means the circuit breaker pattern is applied in the API gateway when calling internal microservices. After 30 seconds, the API gateway tries to call the microservice again, and if that call succeeds, the circuit closes and the service is once again available to traffic.
The Circuit Breaker pattern prevents broken communications from repeatedly trying to send requests that are likely to fail. We set a pre-defined number of failed calls and a time period for these failures; if the failures happen within that time interval, it blocks all traffic to the service.
After that, it periodically tries to call the target service in order to determine whether it is up or down. If the service call succeeds, it closes the circuit and continues to accept requests again.
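To make the Closed, Open and Half-Open transitions visible, a hedged standalone Polly sketch with state callbacks could look like this (the thresholds are arbitrary for the example and differ from the ones we configure later):

var circuitBreakerPolicy = Policy
    .Handle<HttpRequestException>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 5,
        durationOfBreak: TimeSpan.FromSeconds(30),
        onBreak: (exception, breakDelay) =>
            Console.WriteLine($"Circuit opened for {breakDelay.TotalSeconds}s due to: {exception.Message}"),
        onReset: () => Console.WriteLine("Circuit closed, calls flow normally again"),
        onHalfOpen: () => Console.WriteLine("Circuit half-open, the next call is a trial"));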
https://www.gokhan-gokalp.com/en/resiliency-patterns-in-microservice-architecture/
Retry + Circuit Breaker Pattern
If we compare it with the Retry pattern: the Retry pattern only retries an operation in the expectation that it will eventually succeed, while the Circuit Breaker pattern prevents broken communications from repeatedly sending requests that are likely to fail.
So we are going to apply both the Retry and Circuit-Breaker patterns in our microservices reference application using the Polly library.
https://github.com/App-vNext/Polly
We will examine the examples in the GitHub readme file and adapt them for our microservices.
For further explanations of this topic, you should check the Microsoft architecture book named Architecting Cloud Native .NET Applications for Azure, written by Rob Vettor and Steve Smith.
Go to
https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/
If you go to the Cloud-native resiliency topic, you can get additional details. Our images also come from this book; it is a free book from Microsoft.
Bulkhead Pattern
The Bulkhead design pattern aims to ensure that an error occurring in one location does not affect other services, by isolating the services. The main purpose is to isolate the place of the error and secure the rest of the services.
The inspiration for this design pattern is naval architecture. When ships or submarines are built, they are made not as one single compartment but are divided into shielded areas, so if a flood or fire happens, the relevant compartment is closed and isolated and the ship or submarine can continue its mission.
If we think of our microservices as a ship, we must be able to respond to user requests by isolating the relevant service, so that an error occurring in one service does not affect other services.
https://speakerdeck.com/slok/resilience-patterns-server-edition?slide=33
Looking at the image here, we can better understand why we need the bulkhead design pattern. In case the HTTP requests from users come in too fast, the physical machines will not be able to process all requests at the same time and will naturally queue them.
Let's assume that users make requests to service A, service A does not respond due to a software bug, and user requests start to accumulate in the queue as shown in number two. After a certain time, the requests will start to use all the CPU threads of the machines and, after a while, all new requests will start to wait in the queue.
While all requests to the non-working service A are busy with resources, requests to service B, which is running successfully, will also continue to wait in the queue; in this case, none of the requesting users will be satisfied.
In order to solve this problem, the bulkhead design pattern suggests that when a service is unresponsive, instead of using all resources to process requests to that service, some of the resources are allocated to a separate pool and the other requests are processed with the remaining server resources.
To implement this design pattern, resource limits can be defined for the application server where the services are running, for example the maximum number of threads and the memory limit that the service can use.
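Although we will not apply it in this article, a hedged Polly sketch of the Bulkhead policy described above could look like this; the limits are chosen only as an example:

// Allow at most 10 concurrent executions against the dependency and queue at most 20 more;
// anything beyond that is rejected immediately instead of exhausting threads.
var bulkheadPolicy = Policy.BulkheadAsync<HttpResponseMessage>(
    maxParallelization: 10,
    maxQueuingActions: 20,
    onBulkheadRejectedAsync: context =>
    {
        Console.WriteLine("Request rejected by the bulkhead");
        return Task.CompletedTask;
    });

Like the retry and circuit-breaker policies below, it could be attached to an HttpClient registration with AddPolicyHandler.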
Apply Retry Pattern with Polly policies on HttpClientFactory for
Shopping.Aggregator Microservices
We are going to apply the Retry pattern with Polly policies on HttpClientFactory for the Shopping.Aggregator microservice. As you remember, when we developed the Shopping.Aggregator microservice, it consumes internal microservices using HttpClientFactory and aggregates Catalog, Basket and Ordering microservices data in one web service.
If one of the internal microservices is temporarily down or not accessible, we will apply the retry pattern using Polly policies on HttpClientFactory for the Shopping.Aggregator microservice.
So how can Polly policies apply the retry pattern on HttpClientFactory?
https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests
You can see in this picture, we are going to
• Use IHttpClientFactory to implement resilient HTTP requests
• Implement HTTP call retries with exponential backoff with
IHttpClientFactory and Polly policies
You can find the main article for this picture at
https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests
Before we start, we should go to the Shopping.Aggregator microservice.
In this microservice we use IHttpClientFactory to implement resilient HTTP requests.
We have implemented typed client classes that use the injected and configured HttpClient via IHttpClientFactory.
Go to Shopping.Aggregator
Startup.cs — ConfigureServices
services.AddHttpClient<ICatalogService, CatalogService>(c =>
    c.BaseAddress = new Uri(Configuration["ApiSettings:CatalogUrl"]))
    .AddHttpMessageHandler<LoggingDelegatingHandler>();

services.AddHttpClient<IBasketService, BasketService>(c =>
    c.BaseAddress = new Uri(Configuration["ApiSettings:BasketUrl"]))
    .AddHttpMessageHandler<LoggingDelegatingHandler>();

services.AddHttpClient<IOrderService, OrderService>(c =>
    c.BaseAddress = new Uri(Configuration["ApiSettings:OrderingUrl"]))
    .AddHttpMessageHandler<LoggingDelegatingHandler>();
We have 3 API connections via HttpClientFactory. So we are going to develop policies for the Retry and Circuit Breaker patterns with Polly on HttpClientFactory.
Let’s take an action.
Install Nuget Package
Install-Package Microsoft.Extensions.Http.Polly
See that
<PackageReference Include="Microsoft.Extensions.Http.Polly" Version="5.0.1" />
Before we start, we should check Polly repository on github
Polly Policies :
https://github.com/App-vNext/Polly
// Retry a specified number of times, using a function to
// calculate the duration to wait between retries based on
// the current retry attempt (allows for exponential backoff)
// In this case will wait for
//  2 ^ 1 = 2 seconds then
//  2 ^ 2 = 4 seconds then
//  2 ^ 3 = 8 seconds then
//  2 ^ 4 = 16 seconds then
//  2 ^ 5 = 32 seconds
Policy
    .Handle<SomeExceptionType>()
    .WaitAndRetry(5, retryAttempt =>
        TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));
If you check Retry operation, you can see example of usage with custom wait
strategy.
Developing Retry and Circuit Breaker Pattern
Let's develop custom policy methods with detailed policies.
Go to Shopping.Aggregator
Startup.cs
private static IAsyncPolicy<HttpResponseMessage> GetRetryPolicy()
{
    // In this case will wait for
    //  2 ^ 1 = 2 seconds then
    //  2 ^ 2 = 4 seconds then
    //  2 ^ 3 = 8 seconds then
    //  2 ^ 4 = 16 seconds then
    //  2 ^ 5 = 32 seconds
    return HttpPolicyExtensions
        .HandleTransientHttpError()
        .WaitAndRetryAsync(
            retryCount: 5,
            sleepDurationProvider: retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)),
            onRetry: (exception, retryCount, context) =>
            {
                Log.Error($"Retry {retryCount} of {context.PolicyKey} at {context.OperationKey}, due to: {exception}.");
            });
}

private static IAsyncPolicy<HttpResponseMessage> GetCircuitBreakerPolicy()
{
    return HttpPolicyExtensions
        .HandleTransientHttpError()
        .CircuitBreakerAsync(
            handledEventsAllowedBeforeBreaking: 5,
            durationOfBreak: TimeSpan.FromSeconds(30));
}
After that, we should wire these policies into our HttpClient registrations. This time we will use the "AddPolicyHandler" method.
Change ConfigureServices
Go to Shopping.Aggregator
Startup.cs — ConfigureServices
public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<LoggingDelegatingHandler>();

    services.AddHttpClient<ICatalogService, CatalogService>(c =>
        c.BaseAddress = new Uri(Configuration["ApiSettings:CatalogUrl"]))
        .AddHttpMessageHandler<LoggingDelegatingHandler>()
        .AddPolicyHandler(GetRetryPolicy())            // ADDED
        .AddPolicyHandler(GetCircuitBreakerPolicy());  // ADDED

    services.AddHttpClient<IBasketService, BasketService>(c =>
        c.BaseAddress = new Uri(Configuration["ApiSettings:BasketUrl"]))
        .AddHttpMessageHandler<LoggingDelegatingHandler>()
        .AddPolicyHandler(GetRetryPolicy())            // ADDED
        .AddPolicyHandler(GetCircuitBreakerPolicy());  // ADDED

    services.AddHttpClient<IOrderService, OrderService>(c =>
        c.BaseAddress = new Uri(Configuration["ApiSettings:OrderingUrl"]))
        .AddHttpMessageHandler<LoggingDelegatingHandler>()
        .AddPolicyHandler(GetRetryPolicy())            // ADDED
        .AddPolicyHandler(GetCircuitBreakerPolicy());  // ADDED
TEST Polly :
Make sure that your Docker microservices are working fine.
Run Docker compose :
docker-compose -f docker-compose.yml -f docker-compose.override.yml
up -d
docker-compose -f docker-compose.yml -f docker-compose.override.yml
down
Now we can test application retry operations.
Set a Startup Project
Shopping.Aggregator
Change App url
"ApiSettings": {
  "CatalogUrl": "http://localhost:8000",
  "BasketUrl": "http://localhost:8009",   // CHANGE 8001 to 8009
• Run the application and test with the user name swn
• See the logs
As you can see, the call was retried 3 times.
As you can see, we have developed policies for the Retry and Circuit Breaker patterns with Polly on HttpClientFactory.
Get Udemy Course with discounted — Microservices Observability,
Resilience, Monitoring on .Net.
For the next articles ->
• 3- Microservices Monitoring with Health Checks using WatchDog
Step by Step Development w/ Udemy Course
Microservices Monitoring with
Health Checks using WatchDog
Mehmet Ozkaya · Follow
Published in aspnetrun · 9 min read · Apr 20, 2021
In this article, we are going to develop Microservices Monitoring with Health Checks using WatchDog. We will use the HealthChecks feature in our back-end ASP.NET microservices.
https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/monitor-app-health
Microservice architecture has become the new model for building modern cloud-native applications. And microservices-based applications are distributed systems.
While architecting distributed microservices-based applications, we get a lot of benefits, such as easier scaling and management of services. But at the same time, the increasing interactions between those services create a new set of problems.
So we should assume that failures will happen, and we should deal with unexpected failures, especially in a distributed system. For example, in case of network or container failures, microservices must have a strategy to retry requests.
How do you handle the complexity that comes with cloud and microservices?
— Health Monitoring
Microservices should be designed for monitoring with health monitoring. Health monitoring is critical to multiple aspects of operating microservices.
This way, we can understand whether a particular microservice is alive and ready to accommodate requests. We can also provide health information to our orchestrator's cluster, so that the cluster can act accordingly. For example, Kubernetes has Liveness and Readiness probes that we can point at health check urls.
That makes for good health reporting, customized for our microservices, for example by adding sub health checks for the underlying database connection; this way we can detect and fix issues in our running application much more easily.
You can see in the first picture that the Ordering microservice health check has 3 sub health checks:
• 1 for the Ordering microservice itself
• 2 for the underlying database connection
• 3 for the underlying RabbitMQ connection
So, since all sub conditions are healthy, we can say the Ordering microservice is ready to accommodate requests.
Let’s check our big picture and see what we are going to build one by one.
As you can see, we are here, starting to develop "Microservices Monitoring with Health Checks using WatchDog".
We are going to cover the;
• Implement health checks in ASP.NET Core services
• Use the HealthChecks feature in your back-end ASP.NET microservices
• This will be the ASP.NET health check implementation with custom health check methods which include database availability checks — for example in the Basket microservice, we will add sub health check conditions for connecting to Redis and RabbitMQ.
• Query microservices to report their health status
• Return health status in JSON format
• Use Watchdogs, which is a separate service that can watch health and load across microservices and report on the health of the microservices by querying their HealthChecks endpoints.
We will apply these Monitoring Health Checks to all microservices, starting with the Catalog microservice and continuing with all the others. After that we will create a new microservice named "WebStatus". This will implement the health check UI, listen to all microservices health checks and visualize them with WatchDog. Lastly we containerize all microservices in the Docker environment and monitor their health by checking the "WebStatus" microservice.
Background
This article is part of a series of articles. You can follow the series with the links below.
• 0- Microservices Observability, Resilience, Monitoring on .Net
• 1- Microservices Observability with Distributed Logging using
ElasticSearch and Kibana
• 2- Microservices Resilience and Fault Tolerance with applying Retry and
Circuit-Breaker patterns using Polly
• 3- Microservices Monitoring with Health Checks using WatchDog
We will focus on microservices cross-cutting concerns in this article series.
Step by Step Development w/ Udemy Course
Get Udemy Course with discounted — Microservices Observability,
Resilience, Monitoring on .Net.
Source Code
Get the Source Code from AspnetRun Microservices Github — Clone or fork this repository, and if you like it, don't forget to star it. If you find an issue or have a question, you can open an issue directly on the repository.
Asp.Net Health Checks
When you are working with microservices, health monitoring is critical to multiple aspects of operating them. Unhealthy services lead to revenue loss, reduced customer confidence, and negative effects on the business. In order to avoid such situations, we should regularly monitor the status of our apps.
Using these health checks, we can notify customers about outages beforehand and reduce the impact of an outage by repairing it or informing the customer about the ongoing outage, which builds trust.
While a simple application may require monitoring of its dependencies once
a day, a critical application may need to be monitored as often as possible.
We are going to provide a status page to view the results and add functionality to inform developers about issues.
We will use the health check API endpoint, which will return the instant health state (Healthy / Unhealthy / Degraded) of each microservice in its response. This endpoint will perform the necessary health checks that we define.
WatchDog Health Dashboard
First, let's explain the health statuses we mentioned above:
Healthy: All of our components are healthy.
Unhealthy: There may be a problem with at least one of our components, or an unhandled exception may have been thrown during the health check.
Degraded: This indicates that the components are all healthy but the checks are slow or unstable. For example, we get a successful response to a simple database query, but the execution time is higher than we expected.
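For example, a hedged sketch of a custom health check that reports Degraded when a dependency answers slowly (the dependency call and the 1-second threshold are assumptions for illustration) could look like this:

public class SlowDependencyHealthCheck : IHealthCheck
{
    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            await PingDependencyAsync(cancellationToken); // hypothetical dependency call
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy("Dependency is unreachable", ex);
        }

        // Reachable but slow: report Degraded instead of Unhealthy.
        return stopwatch.ElapsedMilliseconds > 1000
            ? HealthCheckResult.Degraded("Dependency responded slowly")
            : HealthCheckResult.Healthy();
    }

    private static Task PingDependencyAsync(CancellationToken token) => Task.Delay(10, token);
}

Such a check could be registered with services.AddHealthChecks().AddCheck<SlowDependencyHealthCheck>("slow-dependency"); in the rest of this article we mostly use ready-made checks from the AspNetCore.HealthChecks.* packages instead.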
Ok, now we can talk about implementing basic health monitoring. When developing ASP.NET Core microservices, you can use the built-in health monitoring feature via the "Microsoft.Extensions.Diagnostics.HealthChecks" nuget package.
These health monitoring features are enabled by using a set of services and middleware.
Go to ->
https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks#ui-storage-providers
See Health Checks
HealthChecks packages include health checks for:
Sql Server
MySql
Oracle
Sqlite
RavenDB
Postgres
EventStore
…
See HealthCheckUI
The project HealthChecks.UI is a minimal UI interface that stores and shows
the health checks results from the configured HealthChecks uris.
To integrate HealthChecks.UI in your project you just need to add the
HealthChecks.UI services and middlewares available in the package:
AspNetCore.HealthChecks.UI
So now we are ready to implement health checks.
Adding Health Check for Catalog.API Microservices with
Checking MongoDb Connection
Before we start, we should talk about the HealthChecks nuget packages. We do not need to install an extra package for the basics; it comes with the ASP.NET Core framework.
Nuget Package
Install-Package Microsoft.AspNetCore.Diagnostics.HealthChecks
After that, we are going to add the health check registration.

Startup.cs -> ConfigureServices

services.AddHealthChecks();

Startup.cs -> Configure

app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
    endpoints.MapHealthChecks("/hc"); // ADDED
});
Now we can test it.
• TEST:
http://localhost:5000/hc
Healthy
But the Catalog.API project uses MongoDB, so we also need to check whether the MongoDB connection is available. If Catalog.API is running but the MongoDB container is not, the web application itself works while one of its components does not. In that case we report Degraded as the health check result.
In order to check mongo connection in HealthCheck
Nuget Package :
Install-Package AspNetCore.HealthChecks.MongoDb
Add a new check to the health checks builder:

Startup.cs -> ConfigureServices

services.AddHealthChecks()
    .AddMongoDb(Configuration["DatabaseSettings:ConnectionString"],
        "Catalog MongoDb Health",
        HealthStatus.Degraded);
Now we can test the Catalog health check together with MongoDB.

Stop the mongo docker container and test again:

docker-compose -f docker-compose.yml -f docker-compose.override.yml down

• TEST:
http://localhost:5000/hc
Degraded

Start the mongo docker container and test again:

docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d

• TEST: after mongo is running
http://localhost:5000/hc
Healthy
Adding a JSON response writer

Nuget Package

Install-Package AspNetCore.HealthChecks.UI.Client

Startup.cs -> Configure

app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
    endpoints.MapHealthChecks("/hc", new HealthCheckOptions()
    {
        Predicate = _ => true,
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });
});
TEST:
• Change the Run Profile to "Catalog.API" and run the application
• http://localhost:5000/hc

JSON Response:

{"status":"Healthy","totalDuration":"00:00:00.0023926","entries":{"MongoDb Health":{"data":{},"duration":"00:00:00.0021199","status":"Healthy","tags":[]}}}
As you can see that, we have added Health Check for Catalog.API
Microservices with Checking MongoDb Connection.
For Basket Microservices, we should add Redis health check.
Add Redis Health Check

In order to check the Redis connection in the health check:

Nuget Package:

Install-Package AspNetCore.HealthChecks.Redis

Startup.cs -> ConfigureServices

services.AddHealthChecks() // ADDED
    .AddRedis(Configuration["CacheSettings:ConnectionString"],
        "Redis Health",
        HealthStatus.Degraded);
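The relational databases behind the Ordering and Discount microservices can be checked the same way with the AspNetCore.HealthChecks.SqlServer and AspNetCore.HealthChecks.NpgSql packages. The snippet below is only a sketch; the configuration keys are assumptions and should match the connection strings those services already use:

// Ordering.API -> Startup.cs -> ConfigureServices (sketch)
services.AddHealthChecks()
    .AddSqlServer(Configuration["ConnectionStrings:OrderingConnectionString"],
        name: "Ordering SqlServer Health",
        failureStatus: HealthStatus.Degraded);

// Discount.API -> Startup.cs -> ConfigureServices (sketch)
services.AddHealthChecks()
    .AddNpgSql(Configuration["DatabaseSettings:ConnectionString"],
        name: "Discount PostgreSQL Health",
        failureStatus: HealthStatus.Degraded);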
For the Shopping.Aggregator and AspnetRunBasics microservices, we can add UrlGroup checks in order to check the health of the services they depend on.
Add Uri Group Health Checks for Shopping.Aggregator and AspnetRunBasics

Nuget Package:

Install-Package AspNetCore.HealthChecks.Uris

Shopping.Aggregator
Startup.cs -> ConfigureServices

services.AddHealthChecks()
    .AddUrlGroup(new Uri($"{Configuration["ApiSettings:CatalogUrl"]}/swagger/index.html"), "Catalog.API", HealthStatus.Degraded)
    .AddUrlGroup(new Uri($"{Configuration["ApiSettings:BasketUrl"]}/swagger/index.html"), "Basket.API", HealthStatus.Degraded)
    .AddUrlGroup(new Uri($"{Configuration["ApiSettings:OrderingUrl"]}/swagger/index.html"), "Ordering.API", HealthStatus.Degraded);
Developing WebStatus App for Centralized Microservices
Health Monitoring Using Watchdogs for Visualization
First of all, we are going to:
• Create a new project: an ASP.NET Core MVC project named WebStatus
• Set the port: http://localhost:5007
• Set WebStatus as the startup project and run the application
After that, install required nuget packages
Add Nuget Package
Install-Package AspNetCore.HealthChecks.UI
Install-Package AspNetCore.HealthChecks.UI.InMemory.Storage
Manage Startup for adding HealthChecksUI with the watchdog

Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    //…
    // Registers the required services for the health checks UI
    services.AddHealthChecksUI()
        .AddInMemoryStorage();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    //…
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapHealthChecksUI(); // ADDED
        //…
    });
}
As you can see, we have started developing the WebStatus app for centralized microservices health monitoring.
After adding the packages and the startup code, we should configure the health monitoring URIs whose results we want to check.
Configuration file for the health check UI:

appsettings.json

"HealthChecks-UI": {
  "HealthChecks": [
    {
      "Name": "Catalog Health Check",
      "Uri": "http://localhost:5000/hc"
    },
    {
      "Name": "Basket Health Check",
      "Uri": "http://localhost:5001/hc"
    }
  ]
},
For testing purposes, I am adding only the Catalog and Basket health checks for now. After the test I will add all microservices with their docker URLs.
Of course we also need to redirect the home page to the health check UI:
HomeController

public IActionResult Index()
{
    return Redirect("/healthchecks-ui"); // ADDED
    //return View();
}
The default UI path comes from the library: https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks/blob/master/src/HealthChecks.UI/Configuration/Options.cs

public string UIPath { get; set; } = "/healthchecks-ui";
Now we can test the application
• Set Multiple Run Profile
Catalog.API
Basket.API
WebStatus
• Run Application :
localhost:5007/healthchecks-ui
See that the dashboard works!
Adding all microservices health check URLs with docker URIs

- Add all health check URIs into the configuration

appsettings.json

"HealthChecks-UI": {
  "HealthChecks": [
    {
      "Name": "Catalog Health Check",
      "Uri": "http://localhost:8000/hc"
    },
    {
      "Name": "Basket Health Check",
      "Uri": "http://localhost:8001/hc"
    },
    {
      "Name": "Discount Health Check",
      "Uri": "http://localhost:8002/hc"
    },
    {
      "Name": "Ordering Health Check",
      "Uri": "http://localhost:8004/hc"
    },
    {
      "Name": "Shopping Aggregator Health Check",
      "Uri": "http://localhost:8005/hc"
    },
    {
      "Name": "AspnetRunBasics WebMVC Health Check",
      "Uri": "http://localhost:8006/hc"
    }
  ]
},
We have configured the docker URLs. However, the existing microservice docker images do not yet include the health check changes, so we need to rebuild the images before testing.
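One way to do this is to pass the --build flag when bringing the containers up:

docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d --build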
As you can see, we have finished developing the WebStatus app for centralized microservices health monitoring using a watchdog for visualization.
Containerize WebStatus Health Monitoring Microservices using
Docker Compose
We are going to Containerize WebStatus Health Monitoring Microservices
using Docker Compose. We are going to add Docker-Compose File for
WebStatus Application Microservices Solution.
First, we should go to the WebStatus project. In order to create the Dockerfile and the docker-compose entries, we use Visual Studio Container Orchestrator Support.
In the WebStatus project:
Right click and choose Add > Container Orchestrator Support.
The Docker Support Options dialog appears.
Choose Docker Compose.
Choose Linux.
The Dockerfile and docker-compose entries are created.
Visual Studio creates the Dockerfile and updates the docker-compose.yml file with the newly added webstatus image.
Examine the docker-compose yaml files
We should format the docker-compose yaml files with consistent indentation (spaces); yaml is very strict about the indentation of its entries.
docker-compose.yml

version: '3.4'
...
  webstatus:
    image: ${DOCKER_REGISTRY-}webstatus
    build:
      context: .
      dockerfile: WebApps/WebStatus/Dockerfile

docker-compose.override.yml

version: '3.4'
...
  webstatus:
    container_name: webstatus
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - HealthChecksUI__HealthChecks__0__Name=Catalog Health Check
      - HealthChecksUI__HealthChecks__0__Uri=http://catalog.api/hc
      - HealthChecksUI__HealthChecks__1__Name=Basket Health Check
      - HealthChecksUI__HealthChecks__1__Uri=http://basket.api/hc
      - HealthChecksUI__HealthChecks__2__Name=Discount Health Check
      - HealthChecksUI__HealthChecks__2__Uri=http://discount.api/hc
      - HealthChecksUI__HealthChecks__3__Name=Ordering Health Check
      - HealthChecksUI__HealthChecks__3__Uri=http://ordering.api/hc
      - HealthChecksUI__HealthChecks__4__Name=Shopping Aggregator Health Check
      - HealthChecksUI__HealthChecks__4__Uri=http://shopping.aggregator/hc
      - HealthChecksUI__HealthChecks__5__Name=AspnetRunBasics WebMVC Health Check
      - HealthChecksUI__HealthChecks__5__Uri=http://aspnetrunbasics/hc
    ports:
      - "8007:80"
Here we are overriding the existing appsettings inside the docker container. To override array entries in appsettings on Docker we use the double underscore (__) separator.
You can see that we have overridden the health check URIs with the docker container names.
As a summary, we have created the Dockerfile and docker-compose configuration for the WebStatus health monitoring application.
So now we can test all microservices within the docker-compose environment.
Test on the Docker Environment: WebStatus Health Monitoring in Docker Compose to Visualize the WatchDog Health Checks
Close all running containers and run the commands below from the folder that contains the compose files;
Start/Stop Docker Compose:
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d
docker-compose -f docker-compose.yml -f docker-compose.override.yml down
• Check the WebStatus
• WebStatus — HC List
http://localhost:8007
http://host.docker.internal:8007
As you can see, we have tested all microservices on the docker environment with the health check features of the WebStatus health monitoring application.
Securing Microservices with
IdentityServer4, OAuth2 and
OpenID Connect fronted by Ocelot
API Gateway
Mehmet Ozkaya · Follow
Published in aspnetrun · 25 min read · Oct 13, 2020
Target Architecture of Identity Server with Microservices Reference Application
In this article, we’re going to learn how to secure microservices with using
standalone Identity Server 4 and backing with Ocelot API Gateway. We’re
going to protect our ASP.NET Web MVC and API applications with using
OAuth 2 and OpenID Connect in IdentityServer4. Securing your web
application and API with tokens, working with claims, authentication and
authorization middlewares and applying policies, and so on.
Step by Step Development w/ Course
I have just published a new course — Securing .NET 5 Microservices with
IdentityServer4 with OAuth2, OpenID Connect and Ocelot Api Gateway.
In the course, we are securing .Net 5 microservices with using standalone
Identity Server 4 and backing with Ocelot API Gateway. We’re going to
protect our ASP.NET Web MVC and API applications with using OAuth 2 and
OpenID Connect in IdentityServer4.
Source Code
Get the source code from the AspnetRun Microservices GitHub repository; clone or fork it, and don't forget the star if you like it. If you find a problem or have a question, you can open an issue on the repository directly.
Overall Picture
See the overall picture. You can see that we will have 4 microservices which
we are going to develop.
Movies.API
First of all, we are going to develop Movies.API project and protect this API
resources with IdentityServer4 OAuth 2.0 implementation. Generate JWT
Token with client_credentials from IdentityServer4 and will use this token
for securing Movies.API protected resources.
Movies.MVC
After that, we are going to develop Movies.MVC Asp.Net project for
Interactive Client of our application. This Interactive Movies.MVC Client
application will be secured with OpenID Connect in IdentityServer4. Our
client application pass credentials with logging to an Identity Server and
receive back a JSON Web Token (JWT).
Identity Server
Also, we are going to develop a centralized standalone Authentication Server and Identity Provider by implementing the IdentityServer4 package; the name of this microservice is Identity Server.
Identity Server4 is an open source framework which implements OpenId
Connect and OAuth2 protocols for .Net Core.
With Identity Server, we can provide authentication and access control for
our web applications or Web APIs from a single point between applications
or on a user basis.
Ocelot API Gateway
Lastly, we are going to develop the Ocelot API Gateway and secure the protected API resources behind it by forwarding the JWT tokens.
Once the client has a bearer token it will call the API endpoint which is
fronted by Ocelot. Ocelot is working as a reverse proxy.
After Ocelot reroutes the request to the internal API, it will present the token
to Identity Server in the authorization pipeline. If the client is authorized the
request will be processed and a list of movies will be sent back to the client.
On top of this picture, we also apply claims-based authentication.
Background
You can follow the previous article, which explains the overall microservice architecture of this repository.
Prerequisites
• Install the .NET 5 or above SDK
• Install Visual Studio 2019 v16.x or above
• Install Postman
Introduction
We will develop security operations with Identity Server integration for an
existing microservices reference application.
We had developed run-aspnetcore-microservices reference application
before this article. You can see the overall picture of the reference
microservices application.
Target Architecture of Identity Server with Microservices Reference Application
Now, with the newly added red box in the picture, we will extend this application with IdentityServer OAuth 2.0 and OpenID Connect features.
By the end of this article, you'll learn how to secure your ASP.NET based microservices applications with IdentityServer4 using OAuth 2 and OpenID Connect. You'll also learn how to secure protected APIs behind an Ocelot API Gateway in a microservices architecture.
Code Structure
Let's check our project code structure in the Visual Studio Solution Explorer window. You can see the 4 ASP.NET Core microservice projects which we saw in the overall picture.
If we expand the projects, you will see that;
Movies.API is an asp.net core web api project which includes crud api
operations.
Movies.MVC is an ASP.NET Core MVC web application project which consumes the Movies.API project and presents the data.
IdentityServer is a standalone Identity Provider for our architecture.
APIGateway is an api gateway between Movies.MVC and Movies.API.
Before we start we should learn the basics of terminology.
What is JWT (Token) ?
User authentication and authorization is very important while developing
our web projects. We use various methods to protect our application from
unauthorized persons and we provide access to it only for authorized users.
One of these solutions is to use tokens.
At this point, there are some industry standards. We are going to talk about
JSON Web Token (JWT).
JWT (JSON Web Tokens) is an RFC7519 industry standard. It is used for user
authentication and authorization on systems that communicate with each
other. JWT can be used for many issues such as user authentication, web
service security, and information security. JWT is a very popular and
preferred method when protecting our resources. If we want to access a
protected resource, the first thing we have to do is to retrieve a token.
JWT Example Scenario
The general scenario is:
The user sends a login request to the server with username and password.
The Authentication Server takes this information and queries it from the
database, combines the user name with its secret key and generate the
token.
When the user is authenticated, the Authentication Server returns the token, and the server writes this information to the related user's token column in the database.
The user passes the JWT in the header of each API call, and the server side checks every time whether the token is still valid against the database.
The application verifies the token and processes the API call.
JWT(JSON Web Tokens) Structure
The token produced with JWT consists of 3 main parts encoded with Base64.
These are Header (Header), Payload (Data), Signature parts. Let’s take a
closer look at these parts. If we pay attention to the token example on the
side, there are 3 fields separated by dots in the token form.
Header
This part to be used in JWT is written in JSON format and consists of 2 fields.
These are the token type and the name of the algorithm to be used for
signing.
Payload (Data)
This section includes claims. The data kept in this section is unique between
the token client and server. This captured claim information also provides
this uniqueness.
Signature
This part is the last part of the token. Header, payload and secret key are
required to create this part. With the signature part, data integrity is
guaranteed.
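To make this concrete, here is a small, hypothetical C# helper (a sketch, not part of the course projects) that splits a token at the dots and decodes the header and payload from Base64Url; the signature part is left encoded because it is binary:

using System;
using System.Text;

public static class JwtInspector
{
    // Returns the decoded header and payload JSON of a JWT.
    public static (string Header, string Payload) Decode(string jwt)
    {
        var parts = jwt.Split('.');          // header.payload.signature
        if (parts.Length != 3)
            throw new ArgumentException("A JWT must have exactly three dot-separated parts.");

        return (DecodePart(parts[0]), DecodePart(parts[1]));
    }

    private static string DecodePart(string base64Url)
    {
        // Convert Base64Url to standard Base64 and pad to a multiple of 4.
        var base64 = base64Url.Replace('-', '+').Replace('_', '/');
        base64 = base64.PadRight(base64.Length + (4 - base64.Length % 4) % 4, '=');
        return Encoding.UTF8.GetString(Convert.FromBase64String(base64));
    }
}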
What is OAuth2 ?
OAuth2 is an open authorization protocol used in data communication between applications. Let's say we have developed an application with OAuth2 support. Once we have authorized a user group, we can give it access to the data source of another application through the OAuth2 protocol.
A familiar example is when a user logs in to the web application you are developing with a Facebook, Google or GitHub account. OAuth2 is a protocol used for
user authorization, not for authentication. Applications such as Google,
Facebook, Twitter provide OAuth2 support and verified users in these
applications can be authorized to other developed web applications that use
and rely on the Oauth2 protocol.
OAuth2 is the industry-standard protocol for authorization. It delegates user
authentication to the service that hosts the user’s account and authorizes
third-party applications to access that account. In short, OAuth2 performs
the authorization process between applications.
There is industry standard documentation regarding OAuth 2.0 — RFC 6749
that helps us a lot to understand OAuth related topics.
OAuth2 Authorization Types and Flows
OAuth2 Grant Types or authorization flows determine the interaction
between a client application and token service. We have said that the OAuth2
protocol is a protocol that does not provide user authentication but has the
necessary support for it.
OAuth2 authorization types:
Authorization Code Grant
Implicit Grant
Resource Owner Password Credential Grant
Client Credential Grant
We can see roles included in OAuth2 flows with the image here;
Client Application: A client application is an application that accesses
protected data sources on behalf of the resource owner. (Web, Javascript,
mobile…)
Client applications are classified as public and confidential. Public client
applications; native applications (applications developed for desktop, tablet,
mobile, etc.), user-agent based applications (Javascript based or SPA
applications).
Resource Owner: The person (end user) or application that owns the data in
the system.
Authorization Server: Gives access tokens within the resource owner
authority for authenticated client applications. In short, it manages
authorization and access.
Resource Server: Allows access according to the access token and
authorization scope by managing the data source that needs to be protected.
The protected data source can be eg profile information or personal
information of the person.
What is OpenId Connect ?
OpenID Connect 1.0 is a simple authentication layer built on top of the OAuth2 protocol; in fact, OpenID Connect is an extension of OAuth 2.0.
end user authentication by an Authorization Server for client applications, it
enables the end user to obtain simple profile information with a structure
similar to REST. OpenId Connect (OIDC) provides an API friendly structure
compared to previous versions. OIDC is an extension that extends the
OAuth2 Authorization Code Grant structure.
OIDC keeps transactions simple, built on OAuth2, and simplifies token (JWT)
usage. It can be integrated with OIDC web applications, mobile applications,
Single page applications (SPA) and server side applications.
OIDC is the industry-standard protocol for authentication.
There is great documentation regarding OIDC — Core as below link that
helps us a lot to understand OIDC related topics.
https://openid.net/specs/openid-connect-core-1_0.html
OpenID Connect Endoints
Authorization Endpoint: It is an endpoint defined in OAuth2. It is
responsible for the authentication and consent process of the end user.
Token Endpoint: Allows the exchange of a client application with the
authorization code or client Id and client secret and access token.
UserInfo Endpoint: It is an endpoint defined with OIDC. A client or resource
server is the point where additional claim requests are provided.
You can check the details of OpenId Connect Endpoints and other details in
below urls. I am sharing reference document links into video.
OpenID Connect Core 1.0 (spec)
OpenID Connect Discovery 1.0 (spec)
OpenID Connect Authentication Flows
As you know, OAuth2 defines authorization grants and extension grants; in the same way, OIDC defines authentication flows.
There are three authentication flows. These flows differ according to the
parameters passed to the OpenId Provider (OP), the content of the responses
and how they are processed. The difference between the requests made to
the Authorization Endpoint is determined by the response_type parameter.
Flows and their response_type parameter values:
Authorization Code Flow -> "code"
Implicit Flow -> "id_token" or "id_token token"
Hybrid Flow -> "code id_token" or "code token" or "code id_token token"
When the response_type parameter contains the code value, the authorization endpoint always returns an authorization code. If id_token is included in the response_type, an id_token is included in the response; and if token is included in the response_type, an access token is returned from the authorization endpoint.
In short, this parameter determines the response structure and which
operation steps the client application will use.
Of these three flows, the Authorization Code Flow is an extension of the OAuth2 Authorization Code Grant.
OIDC Implicit Flow and Hybrid Flow are an extension of Authorization Code
Flow.
What is Identity Server 4 ?
Identity Server4 is an open source framework which implements OpenId
Connect and OAuth2 protocols for .Net Core.
With IdentityServer, we can provide authentication and access control for
our web applications or Web APIs from a single point between applications
or on a user basis.
IdentityServer determines how your web or native client applications that want to access a Web API (resource) in corporate or modern web applications are authenticated and authorized. Because of this, there is no need to write client-specific identification logic on the Web API side.
You can call this centralized security system as a security token service,
identity provider, authorization server, IP-STS and more.
In summary, IdentityServer4 is a service that issues security tokens to clients.
IdentityServer has a number of jobs and features — including:
• Protect your resources
• Authenticate users using a local account store or via an external identity
provider
• Provide session management and single sign-on
• Manage and authenticate clients
• Issue identity and access tokens to clients
• Validate tokens
Identity Server 4 Terminologies
Basically, we can think of our structure as client applications (web, native,
mobile), data source applications (web api, service) and IdentityServer
application. You can see these system into image.
Client Applications
Client Applications are applications that want to access secure data sources
(Web Api) that users use. A client must be first registered with IdentityServer
before it can request tokens.
Examples for clients are web applications, native mobile or desktop
applications, SPAs, server processes etc.
Resources
Data Resources are the data we want to be protected by IdentityServer. Data
sources must have a unique name and the client that will use this resource
must access the resource with this name.
IdentityServer
IdentityServer that we can say OpenId Connect provider over the OAuth2
and OpenId protocols. In short, it provides security tokens to client
applications. IdentityServer protects data sources, ensures authentication of
users, provides single sign-on and session management, verifies client
applications, provides identity and access tokens to client applications and
checks the authenticity of these tokens.
Identity Token
Identity Token represents the result of the authentication process. It contains a subject identifier for the user and information about how and when the user was authenticated. It also contains identity information.
Access Token
Access Token provides access to the data source (API). The client application
can access the data by sending a request to the data source with this token.
Access token contains client and user information. The API uses this
information to authorize and allow access to data.
Identity Server 4 in Microservices World
The security of the application we have developed, and the protection and management of the data it uses, are very important. As technology evolves, giving applications or devices built on different technologies access to our data can introduce security problems and additional work for us.
For example, assume the users of a SPA (Single Page Application) client that consumes a Web API we have developed log in to the system with their username and password. Then another web client of the same Web API wants to sign in with Windows authentication, a native mobile client wants access, and yet another Web API wants to call the Web API we have developed. Since each client application technology supports security differently, we may have to develop or configure the security part of our application again and again.
In such a case, as you can see in the image, we need a central Authentication / Authorization Server that dynamically handles access and security for client applications and data sources (APIs) for the same user groups.
In microservice architectures, authentication is handled centrally by nature.
So in this article, we are going to develop our protected api resource and
client web applications which consumes the apis. In order to secure our
application we will develop the Authentication Server which will be the
IdentityServer4. So by this way, the client is going to get the token from
Authentication Server and send this token when calling the protected api.
Also we will add an API gateway in this picture and protect our apis over the
ocelot api gateway.
Building API Resources — MOVIE.API ASP.NET WEB API
PROJECT
In this section, we are going to learn how to build Asp.Net Web API projects
as API Resources for our target architecture.
Let’s check our big picture of the architecture of what we are going to build
one by one.
As you can see, we are going to start by developing the Movies.API project. This API project holds the movie data and performs CRUD operations, exposing API methods for clients to consume.
The project manages movie records stored in a relational in-memory database as described in the table above. The movie data will be our protected resource; after developing this API project, we will protect these resources with Identity Server.
We will use the Entity Framework Core In-Memory database provider while developing the CRUD operations.
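As a minimal sketch of what that looks like (the entity and context names here are illustrative, the actual project may differ), the data layer needs the Microsoft.EntityFrameworkCore.InMemory package:

using Microsoft.EntityFrameworkCore;

public class Movie
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Genre { get; set; }
    public int ReleaseYear { get; set; }
}

public class MoviesContext : DbContext
{
    public MoviesContext(DbContextOptions<MoviesContext> options) : base(options) { }

    // Movie records are the protected resource exposed by the CRUD endpoints.
    public DbSet<Movie> Movies { get; set; }
}

// Startup.cs -> ConfigureServices (sketch)
// services.AddDbContext<MoviesContext>(options => options.UseInMemoryDatabase("Movies"));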
Create Asp.Net Core Web API Project For Movies.API Microservice
I am not going to cover every detail here; this is an ASP.NET Core Web API project that implements CRUD operations for the Movie entity with the Entity Framework In-Memory database. You can check the development of this project at the GitHub repository:
mehmetozkaya/SecureResource
This is the Movies.API project as you can see in the below solution view of
project;
Building Identity Server 4
In this section, we are going to learn how to build Identity Server 4
Authentication Microservices for our target architecture.
Let’s check our big picture of the architecture of what we are going to build
one by one.
As you can see that we have developed Movies.API project in previous
section. So in this section, we are going to continue with developing the
Identity Server project.
This project will be Asp.Net Core Empty Web Application and will install
IdentityServer4 nuget package in order to be centralized standalone
Authentication Microservices.
Identity Server4 is an open source framework which implements OpenId
Connect and OAuth2 protocols for .Net Core.
With IdentityServer, we can provide authentication and access control for
our web applications or Web APIs from a single point between applications
or on a user basis.
Create Asp.Net Core Empty Project :
Right Click — Add new Empty Web App — HTTPS — n : IdentityServer
Right Click Manage Nuget Packages :
Browse IdentityServer
Install IdentityServer4
Go to Startup.cs and Configure Services
Register into DI
public void ConfigureServices(IServiceCollection services)
{
services.AddIdentityServer();
}
Add pipeline
app.UseRouting();
app.UseIdentityServer();
Configure IdentityServer4 with Clients, Resources, Scopes and
TestUsers
We are going to configure IdentityServer4 with Clients, Resources, Scopes
and TestUsers.
Identity Server requires additional configuration to run the application successfully.
Open Startup.cs
Extend Register AddIdentityServer method;
services.AddIdentityServer()
.AddInMemoryClients(new List<Client>())
.AddInMemoryIdentityResources(new List<IdentityResource>())
.AddInMemoryApiResources(new List<ApiResource>())
.AddInMemoryApiScopes(new List<ApiScope>())
.AddTestUsers(new List<TestUser>())
.AddDeveloperSigningCredential();
IdentityServer4 Clients, Resources, Scopes and TestUsers
We are using IdentityServer4 In-Memory Configuration. And for now, we are
giving the configurations as an empty list.
Let me briefly explain these configurations in Identity Server. First of all, we don't need to fill in all of them; I put them all here so that we can see every available configuration.
ApiResources
ApiScopes
An API is a resource in your system that you want to protect. Resources or
Apis is the part where the data source or web service we want to protect is
defined.
So for that reason we have defined ApiResources and ApiScopes. Scopes
represent what a client application is allowed to do.
Clients
IdentityServer needs to know what client applications are allowed to use it.
Clients that need to access the Api resource are defined in the clients
section. Each client application is defined in the Client object on the
IdentityServer side. A list of applications that are allowed to use your system.
IdentityResources
An IdentityResource represents user information such as userId, email and name; it has a unique name and we can assign claim types to it. The identity resource information defined for a user is included in the identity token. We can use an identity resource via the scope parameter in the client settings.
TestUsers
Test users are the users of the client applications that need to access the APIs, so for the client application we should also define test users.
DeveloperSigningCredential
Creates temporary key material at startup time. This is for dev only scenarios
when you don’t have a certificate to use. The generated key will be persisted
to the file system so it stays stable between server restarts.
Adding Config Class for Clients, Resources, Scopes and TestUsers
We are going to add Config Class for Clients, Resources, Scopes and
TestUsers definitions on IdentityServer4.
First of all, we are going to do is create a new Config class. This class will
consist of different configurations related to Users, Clients,
IdentityResources, etc. So, let’s add them one by one.
Add Config.cs into IdentityServer project;
public class Config
{
public static IEnumerable<Client> Clients =>
new Client[]
{
};
public static IEnumerable<ApiScope> ApiScopes =>
new ApiScope[]
{
};
public static IEnumerable<ApiResource> ApiResources =>
new ApiResource[]
{
};
public static IEnumerable<IdentityResource> IdentityResources =>
new IdentityResource[]
{
};
public static List<TestUser> TestUsers =>
new List<TestUser>
{
};
}
Change Startup.cs DI Register;
services.AddIdentityServer()
.AddInMemoryClients(Config.Clients)
.AddInMemoryIdentityResources(Config.IdentityResources)
.AddInMemoryApiResources(Config.ApiResources)
.AddInMemoryApiScopes(Config.ApiScopes)
.AddTestUsers(Config.TestUsers)
.AddDeveloperSigningCredential();
Verify that discovery link is working
OpenID Connect Discovery Document
https://localhost:5005/.well-known/openid-configuration
Protecting Movie.API With Using Identity Server 4 JWT Token
In this section we are going to protect Movie.API resources with using
IdentityServer 4 and JWT Token.
Let’s check our big picture of the architecture of what we are going to build
one by one.
As you can see, we developed the Movies.API and Identity Server projects in the previous sections.
In this section, we are going to protect these API resources with the IdentityServer4 OAuth 2.0 implementation. We will generate a JWT token with the client_credentials grant from IdentityServer4 and use it to call the protected Movies.API resources.
After generating a JWT token, we can use this access token to call any API protected by our IdentityServer implementation.
We are going to configure our identity server project in order to protect our
movie api project.
We will define our ApiScopes and Clients into Config class and configure our
services.
First of all, we are going to add a new ApiScope which is Movie API in the
Authorization Server. So, let’s add that one.
Defining an API Scope in Config.cs:

public static IEnumerable<ApiScope> ApiScopes =>
    new ApiScope[]
    {
        new ApiScope("movieAPI", "Movie API")
    };
Additionally, we should create a new client which will request to reach
protected movie.api resources.
Defining the client in Config.cs:

public static IEnumerable<Client> Clients =>
    new Client[]
    {
        new Client
        {
            ClientId = "movieClient",
            AllowedGrantTypes = GrantTypes.ClientCredentials,
            ClientSecrets =
            {
                new Secret("secret".Sha256())
            },
            AllowedScopes = { "movieAPI" }
        }
    };
And finally, we should configure IdentityServer as per our protected
resource. That means we should summarize the identity server
configuration.
Configuring IdentityServer
public void ConfigureServices(IServiceCollection services)
{
services.AddIdentityServer()
.AddInMemoryClients(Config.Clients)
.AddInMemoryApiScopes(Config.ApiScopes)
.AddDeveloperSigningCredential();
}
Run application
Get Success
Verify that discovery link is working
OpenID Connect Discovery Document
https://localhost:5005/.well-known/openid-configuration
Check with https://jsonformatter.org/
As you can see, we can find the movieAPI scope in the supported scopes as well.

"scopes_supported": [
  "movieAPI",
  "offline_access"
],
Get Token from Identity Server with client_credentials grant_type
We are going to Get the Token from Identity Server with client_credentials
grant_type.
First of all, we are going to run IdentityServer Application
Once we start our application, we are going to get information that our
server is up and running and using in-memory grant store in the console
window.
Run IdentityServer Application
See its working well with discovery link
Check Discovery document and see token_endpoint
"authorization_endpoint": "https://localhost:5005/connect/authorize",
"token_endpoint": "https://localhost:5005/connect/token",
Let’s try now to retrieve a token from our authorization server with a
Postman request:
Get token from Postman
Create New Request into Postman “IdentityServer” Folder into Collection
Add “GET Token” request
Set POST
POST https://localhost:5005/connect/token
Click “Body” Section
Pick “x-www-form-urlencoded”
Set below parameters:
grant_type = client_credentials
scope = movieAPI
client_id = movieClient
client_secret = secret
As you can see, we are using /connect/token endpoint to retrieve the token
from the server. For parameters, we provide client_id, client_secret,
client_credentials as a grant_type and scope. Once we press the Send
button, we are going to receive our token.
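The same request can also be issued from code. The snippet below is only a sketch using a plain HttpClient and the parameters shown above; the course itself uses Postman:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenRequester
{
    // Requests a token from IdentityServer with the client_credentials grant.
    public static async Task<string> RequestTokenAsync()
    {
        using var client = new HttpClient();
        var response = await client.PostAsync("https://localhost:5005/connect/token",
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "client_credentials",
                ["scope"] = "movieAPI",
                ["client_id"] = "movieClient",
                ["client_secret"] = "secret"
            }));

        response.EnsureSuccessStatusCode();
        // The body is the JSON document shown below, containing the access_token.
        return await response.Content.ReadAsStringAsync();
    }
}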
Get token response:

{
  "access_token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IkIzQ0ZCMkE5Qjg2QzBDQjJDMDRDQjc2N0U0RUUyMUQ2IiwidHlwIjoiYXQrand0In0.eyJuYmYiOjE1OTg2MTIzMDYsImV4cCI6MTU5ODYxNTkwNiwiaXNzIjoiaHR0cHM6Ly9sb2NhbGhvc3Q6NTAwNSIsImNsaWVudF9pZCI6Im1vdmllQ2xpZW50IiwianRpIjoiREI1OTk5OERBODYyRjU0MjgzNURBRDc4NDg3MTJCMEYiLCJpYXQiOjE1OTg2MTIzMDYsInNjb3BlIjpbIm1vdmllQVBJIl19.YL20fNhfy9hIXI0v1cc9SIpNLH5ht7BNiLihLueZKtKibzYqFta9cmJw1z3nB1NMGl1IzhGH4PItn_KNWD8fHcU51S35IUcHOKnNKJ_upL8vpesTXWB4WAxdFJhF5c4nOvcGrStaWx6B1dymnNrPFJwWqW_Td4xNCpmNswrSRwSKb0ObKFjJNlz2NnqbaCb-AvjwbDzFCLy2soc7GiilqdUVxevi5vfb1WuYCdzPB9y5OMflMP5Dyny1uA9d9m5LIw4gYlYx5KjQJOPBJqY3I95aMHXJYRv5N08q2sMlaI9aP_XQ5MMlwuWyKejJprLNFl4KMuw62M6d9Ug2TshkA",
  "expires_in": 3600,
  "token_type": "Bearer",
  "scope": "movieAPI"
}
If you take this access token over to jwt.io, you can see that it contains the
following claims:
Go to https://jwt.io/
Paste token
See payload details
{
  "nbf": 1598612306,
  "exp": 1598615906,
  "iss": "https://localhost:5005",
  "client_id": "movieClient",
  "jti": "DB59998DA862F542835DAD7848712B0F",
  "iat": 1598612306,
  "scope": [
    "movieAPI"
  ]
}
As you can see, we can successfully generate a token for protecting our movie API using the movieClient definition.
Securing Movies.API with JWT Bearer Token Authentication
First of all, we are going to add Nuget Dependency into Movies.API project.
Because we will protect this api project with jwt bearer token authentication.
Adding the Nuget dependency into Movies.API:

Install-Package Microsoft.AspNetCore.Authentication.JwtBearer
After that, in order to activate JWT bearer token authentication, we should register JWT authentication in the ASP.NET Core dependency injection container. In the Startup.cs ConfigureServices method, we register and configure JWT authentication on the service collection.
Add the Authentication services to DI (dependency injection):

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication("Bearer")
        .AddJwtBearer("Bearer", options =>
        {
            options.Authority = "https://localhost:5005";
            options.TokenValidationParameters = new TokenValidationParameters
            {
                ValidateAudience = false
            };
        });
}
We use the AddAuthentication method to add authentication services to the
dependency-injection of our web api project. Moreover, we use the
AddJwtBearer method to configure support for our Authorization Server.
After that we should also add the authentication middleware into the aspnet
pipeline. Go to Startup.cs — Configure method and adding authentication
middleware;
public void Configure(IApplicationBuilder app, IWebHostEnvironment
env)
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
And finally, we should Authorize the API Controller which is
MoviesController in Movies.API project. By this way we can protect our api
resources.
[Route("api/[controller]")]
[ApiController]
[Authorize]
public class MoviesController : ControllerBase
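With the controller protected, every caller now has to present a valid bearer token or it receives 401 Unauthorized. As a hedged sketch (not code from the course projects, and the URL is an assumption), a client call would look roughly like this, reusing the access token obtained above:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class MoviesApiCaller
{
    // Calls the protected Movies.API endpoint with the bearer access token.
    public static async Task<string> GetMoviesAsync(string accessToken)
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var response = await client.GetAsync("https://localhost:5001/api/movies");
        response.EnsureSuccessStatusCode();   // fails with 401 without a valid token
        return await response.Content.ReadAsStringAsync();
    }
}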
OpenID Connect with IS4 For Interactive MVC Movie Client
Microservice
In this section we are going to integrate the IdentityServer project with the MVC client application to perform interactive client operations. We will focus on the Movies.Client project and integrate it with IdentityServer4 using OpenID Connect.
Let’s check our big picture of the architecture of what we are going to build
one by one.
As you can see, we have the Identity Server and the Movies.Client MVC interactive client application. In order to perform interactive client operations, we should integrate our MVC application with the centralized Identity Server. We are going to authorize our Movies page and redirect the login operation to the IdentityServer UI login page; after a successful login, the client receives the token and is redirected back to the MVC application to continue its operations.
Add new Client into IS4 Config.cs
Our Client application Movie.Client MVC app was in https urls = https://
localhost:5002
Let’s add a client;
Adding a new interactive client in Config.cs:

public static IEnumerable<Client> Clients =>
    new Client[]
    {
        new Client
        {
            ClientId = "shopping_web",
            ClientName = "Shopping Web App",
            AllowedGrantTypes = GrantTypes.Code,
            AllowRememberConsent = false,
            RedirectUris = new List<string>()
            {
                "https://localhost:5003/signin-oidc" // this is the client app port
            },
            PostLogoutRedirectUris = new List<string>()
            {
                "https://localhost:5003/signout-callback-oidc"
            },
            ClientSecrets = new List<Secret>
            {
                new Secret("secret".Sha256())
            },
            AllowedScopes = new List<string>
            {
                IdentityServerConstants.StandardScopes.OpenId,
                IdentityServerConstants.StandardScopes.Profile
            }
        }
    };
So, we provide the client properties such as ClientId, ClientName and ClientSecret. After that, we add the IdentityResources, which define the user information shared at login;
public static IEnumerable<IdentityResource> IdentityResources =>
new IdentityResource[]
{
new IdentityResources.OpenId(),
new IdentityResources.Profile()
};
Finally, let's add the test users that will log in into the configuration;

public static List<TestUser> TestUsers =>
    new List<TestUser>
    {
        new TestUser
        {
            SubjectId = "5BE86359-073C-434B-AD2D-A3932222DABE",
            Username = "mehmet",
            Password = "mehmet",
            Claims = new List<Claim>
            {
                new Claim(JwtClaimTypes.GivenName, "mehmet"),
                new Claim(JwtClaimTypes.FamilyName, "ozkaya")
            }
        }
    };
Also we need to modify registering DI — InMemory Configuration
public void ConfigureServices(IServiceCollection services)
{
services.AddControllersWithViews();
services.AddIdentityServer()
.AddInMemoryClients(Config.Clients)
.AddInMemoryApiScopes(Config.ApiScopes)
.AddInMemoryIdentityResources(Config.IdentityResources)
.AddTestUsers(Config.TestUsers)
.AddDeveloperSigningCredential();
}
Build OpenId Connect Interactive Client for Movies.Client MVC
Application
We are going to configure the OpenID Connect authentication layer for our Movies.Client MVC application. The idea is to create an authentication layer for our MVC client application. We are going to protect the Movies page in the MVC application, and users will log in to the system when they try to reach it. In order to navigate users to the IdentityServer login page, we should configure our MVC client application as an interactive client of Identity Server. As you remember, we defined our "shopping_web" client in the Config class of the IdentityServer project. Now we are going to configure this client for the authentication layer in the MVC client application.
Go To Project -> Movies.Client
First of all we will start with adding OpenIdConnect nuget package;
ADD NUGET PACKAGE
dotnet add package Microsoft.AspNetCore.Authentication.OpenIdConnect
install and build the application.
Now we can configure our MVC application with OpenID Connect authentication. Let me write it first and explain it afterwards; configuring OpenIdConnect authentication:

MVC APP CONFIGURE DI APP

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(options =>
    {
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, options =>
    {
        options.Authority = "https://localhost:5005";
        options.ClientId = "shopping_web";
        options.ClientSecret = "secret";
        options.ResponseType = "code";
        options.Scope.Add("openid");
        options.Scope.Add("profile");
        options.SaveTokens = true;
        options.GetClaimsFromUserInfoEndpoint = true;
    });
}
First of all, we add the authentication services as usual, together with cookie authentication, and we set the default authentication scheme for these services.
We register authentication as a service and populate the DefaultScheme and
DefaultChallengeScheme properties.
After that, We add the AddCookie method with the name of the
AuthenticationScheme, in order to register the cookie handler and the
cookie-based authentication for our Authentication Scheme.
And the most important part is adding AddOpenIdConnect method.
We call AddOpenIdConnect method in order to register and configure the
OIDC handler for our OpenID Connect integration.
The Authority property has the value of our IdentityServer address.
The ClientId and ClientSecret properties must be the same as the client id
and client secret from the Config class of our IdentityServer for this client.
We set the ResponseType property to the code value; therefore, we expect an authorization code to be returned from the authorization endpoint.
We added OpenID and profile scopes for our options in order to access user
information.
We set the SaveTokens property to true to store the tokens after successful authorization.
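To actually trigger this flow, the page we want to protect is decorated with [Authorize]. The sketch below is illustrative (the controller name and view are assumptions, not necessarily the ones in the course project) and also shows how the saved tokens can be read back, since SaveTokens is true:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;   // HttpContext.GetTokenAsync extension
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize]   // unauthenticated users are redirected to the IdentityServer login page
public class MoviesController : Controller
{
    public async Task<IActionResult> Index()
    {
        // Available because options.SaveTokens = true in the OpenID Connect setup.
        var idToken = await HttpContext.GetTokenAsync("id_token");
        var accessToken = await HttpContext.GetTokenAsync("access_token");

        // The access token can be attached to outgoing calls to protected APIs.
        return View();
    }
}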
Hybrid Flow of Identity Server Secure Interactive MVC Client
(OpenID) and API Resources (OAuth) with Together
We are going to implement the Hybrid flow of the OIDC authentication flows, which lets us get tokens over both the front and back channels.
Let’s check our big picture of the architecture of what we are going to build
one by one.
As you can see that, we have developed Movies.Client MVC project for the
interactive client with OpenId Connect.
And also we have developed Movies.API project for protected resources with
OAuth 2.
So now we are going to combine these two operations and provide security in one hybrid flow.
Using Hybrid Flow For Getting Token and Combine Scopes for Interactive
Channel
First of all, we are going to change our flow to Hybrid flow. So we should
modify our GrantTypes into Config.cs.
Go To IDENTITYSERVER Project
Config.cs
Clients
ClientId = "movies_mvc_client",
// AllowedGrantTypes = GrantTypes.Code,   // REMOVED
AllowedGrantTypes = GrantTypes.Hybrid,    // ADDED
RequirePkce = false,                      // ADDED
After that, in the same place, we will extend our scopes by adding "movieAPI" to the AllowedScopes. This way we can access the movie API using the single token obtained by the OpenID Connect interactive client.
Go To IDENTITYSERVER Project
Config.cs
Add "movieAPI" to the AllowedScopes of the OpenID Connect client:

AllowedScopes = new List<string>
{
    IdentityServerConstants.StandardScopes.OpenId,
    IdentityServerConstants.StandardScopes.Profile,
    "movieAPI" // ADDED
}
Check Startup.cs of IS4 that API resource or scopes added or not.
Go To IDENTITYSERVER Project
Startup.cs — ConfigureServices
services.AddIdentityServer()
.AddInMemoryClients(Config.Clients)
.AddInMemoryApiScopes(Config.ApiScopes)
After finishing the configuration in the IdentityServer project, we can update the ResponseType of our interactive MVC client project so that it connects with the Hybrid flow.
Go to MVC.Client, Startup.cs, ConfigureServices, where OpenID Connect is configured, and add the API scope.
services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie(CookieAuthenticationDefaults.AuthenticationScheme)
.AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Authority = "https://localhost:5005";
    options.ClientId = "movies_client";
    options.ClientSecret = "secret";
    // options.ResponseType = "code";          // REMOVED
    options.ResponseType = "code id_token";    // ADDED
We will use the "code id_token" hybrid flow by setting this response type value.

Add the scopes;
    options.Scope.Add("openid");
    options.Scope.Add("profile");
    options.Scope.Add("movieAPI"); // ADDED
In the same place, we add the "movieAPI" scope so that it is covered by the same authentication. This way we can access both the API and the identity values with a single token in a hybrid flow.
Ocelot API Gateway Implementation For Movies MVC Client to
Interact with Identity Server and Carry the Token
We are going to implement Ocelot API Gateway for Movies.MVC Client
Application to interact with Identity Server and carry the token.
Let’s check our big picture of the architecture of what we are going to build
one by one.
As you can see, the Ocelot API Gateway sits in the middle of our architecture, and that is where we are going to develop it.
We are using Ocelot here as a reverse proxy in front of a secured internal ASP.NET Core Web API project, so I am not going to go deep into Ocelot itself.
The gateway exposes APIs to the Movies.MVC client application and consumes the protected Movies.API resources, carrying the token retrieved from Identity Server.
Securing Ocelot API Gateway with Bearer Token
We are going to secure the Ocelot API Gateway with a bearer token. To provide security in Ocelot, we use AuthenticationOptions in the ocelot.json configuration file.
First of all, we are going to modify our configuration file.
Go to -> "ApiGateway"
Go to -> ocelot.json
Add the section below for authentication, which carries the bearer token:

"AuthenticationOptions": {
  "AuthenticationProviderKey": "IdentityApiKey",
  "AllowedScopes": []
}
The whole route entry:

{
  "DownstreamPathTemplate": "/api/movies",
  "DownstreamScheme": "https",
  "DownstreamHostAndPorts": [
    {
      "Host": "localhost",
      "Port": "5001"
    }
  ],
  "UpstreamPathTemplate": "/movies",
  "UpstreamHttpMethod": [ "GET", "POST", "PUT" ],
  "AuthenticationOptions": {
    "AuthenticationProviderKey": "IdentityApiKey",
    "AllowedScopes": []
  }
}
After that, we should provide the JWT bearer token configuration in the Ocelot application.
Go to -> "ApiGateway"
Startup.cs
Add the authentication layer below:
public void ConfigureServices(IServiceCollection services)
{
    var authenticationProviderKey = "IdentityApiKey";
    // NUGET: Microsoft.AspNetCore.Authentication.JwtBearer
    services.AddAuthentication()
        .AddJwtBearer(authenticationProviderKey, x =>
        {
            x.Authority = "https://localhost:5005"; // IDENTITY SERVER URL
            //x.RequireHttpsMetadata = false;
            x.TokenValidationParameters = new TokenValidationParameters
            {
                ValidateAudience = false
            };
        });

    services.AddOcelot();
}
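In the Configure method, the Ocelot middleware is then added to the pipeline. The sketch below is minimal; the actual ApiGateway project may contain more middleware, and ocelot.json also has to be added to the configuration builder (for example in Program.cs) so the routes are picked up:

// Startup.cs -> Configure (sketch); requires "using Ocelot.Middleware;" at the top of the file
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // UseOcelot returns a Task, so we block here because Configure is synchronous.
    app.UseOcelot().Wait();
}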
Secure Existing Microservices Application With adding Identity Server
Integration over the Ocelot and APIs.
We are going to Secure Existing Microservices Application With adding
Identity Server Integration over the Ocelot and APIs.
Let’s check our big picture of the existing microservices architecture. This is
basically an implementation of e-commerce domain in a microservices
architecture.
As you can see, we developed a reference microservices architecture before this course, but the security part is missing. So we should develop the Identity microservice in this picture by applying this article.
Before that, let me share the source code of the existing microservices architecture. You can find the repository at the link below; please star and fork the repository and start working on it. Once you have developed your solution you can send me a pull request, and I will evaluate and merge the code if it is as expected.
https://github.com/aspnetrun/run-aspnetcore-microservices
TODO List of Identity Microservices
Now we can talk about the todo list.
For this assignment I have created the todo list below, which you can extend.
Of course, start with downloading and running the final reference microservice application.
• Create Identity Server Microservice into Reference Microservice
Application
• Add Configurations for Identity Server Microservice
• Create Clients, Identity Resources and Testusers
• Create User Interface for Identity Server
• Set Authentication for Shopping MVC Interaction Client
• Login the Application Securing with Identity Server
• Add Login / Logout Button into Shopping Web Client Application
I am not going to cover all the details here; you can check the development of this project at the GitHub repository:
aspnetrun/run-aspnet-identityserver4
Step by Step Development w/ Course
I have just published a new course — Securing .NET 5 Microservices with
IdentityServer4 with OAuth2, OpenID Connect and Ocelot Api Gateway.
In the course, we are securing .Net 5 microservices with using standalone
Identity Server 4 and backing with Ocelot API Gateway. We’re going to
protect our ASP.NET Web MVC and API applications with using OAuth 2 and
OpenID Connect in IdentityServer4.
Identityserver4
Oauth2
Openid Connect
Jwt
Microservices
Using gRPC in Microservices for
Building a high-performance
Interservice Communication with
.Net 5
Mehmet Ozkaya · Follow
Published in aspnetrun · 34 min read · Nov 10, 2020
gRPC usage of Microservices Communication
In this article, we’re going to learn how to Build a Highly Performant Interservice Communication with gRPC for ASP NET 5 Microservices.
We will introduce gRPC as a modern high-performance RPC framework for
ASP.NET Core and for interservice communication.
gRPC uses HTTP/2 as base transport protocol and ProtoBuf encoding for
efficient and fast communication.
gRPC usage of Microservices
Microservices are modern distributed systems, so with gRPC in ASP.NET 5 we can develop high-performance, cross-platform applications for building distributed systems and APIs.
It's an ideal choice for communication between backend microservices, internal network applications, or IoT devices and services. With the release of ASP.NET 5, Microsoft has added first-class support for creating gRPC services with ASP.NET.
This article will lead you through getting started with building, developing and managing gRPC servers and clients in a distributed microservices architecture.
Step by Step Development w/ Course
I have just published a new course — Using gRPC in Microservices
Communication with .Net 5.
In the course, we are going to build a high-performance gRPC Inter-Service
Communication between backend microservices with .Net 5 and Asp.Net5.
Source Code
Get the source code from the AspnetRun Microservices GitHub repository; clone or fork it, and don't forget the star :) If you find a problem or have a question, you can open an issue on the repository directly.
Overall Picture
See the overall picture. You can see that we will have 6 microservices which
we are going to develop.
We will use Worker Services and Asp.Net 5 Grpc applications to build client
and server gRPC components defining proto service definition contracts.
Basically we will implement e-commerce logic with only gRPC
communication. We will have 3 gRPC server applications which are Product
— ShoppingCart and Discount gRPC services. And we will have 2 worker
services which are Product and ShoppingCart Worker Service. Worker
services will be client and perform operations over the gRPC server
applications. And we will secure the gRPC services with standalone Identity
Server microservices with OAuth 2.0 and JWT token.
ProductGrpc Server Application
First of all, we are going to develop ProductGrpc project. This will be asp.net
gRPC server web application and expose apis for Product Crud operations.
Product Worker Service
After that, we are going to develop Product Worker Service project for
consuming ProductGrpc services. This product worker service project will
be the client of ProductGrpc application and generate products and insert
bulk product records into Product database by using client streaming gRPC
proto services of ProductGrpc application. This operation will be in a time
interval and looping as a service application.
ShoppingCartGrpc Server Application
After that, we are going to develop the ShoppingCartGrpc project. This will be an ASP.NET gRPC server web application exposing APIs for shopping cart (SC) and SC item operations. The gRPC services will create the SC and add or remove items in it.
ShoppingCart Worker Service
After that, we are going to develop ShoppingCart Worker Service project for
consuming ShoppingCartGrpc services. This ShoppingCart worker service
project will be the client of both ProductGrpc and ShoppingCartGrpc
application. This worker service will read the products from ProductGrpc
and create sc and add product items into sc by using gRPC proto services of
ProductGrpc and ShoppingCartGrpc application. This operation will be in a
time interval and looping as a service application.
DiscountGrpc Server Application
When adding a product item into the SC, it will retrieve the discount value and calculate the final price of the product. This communication will also be a gRPC call between the ShoppingCartGrpc and DiscountGrpc applications.
Identity Server
Also, we are going to develop a centralized, standalone authentication server by implementing the IdentityServer4 package; the name of this microservice is Identity Server.
IdentityServer4 is an open-source framework that implements the OpenID Connect and OAuth 2.0 protocols for .NET Core.
With IdentityServer, we can protect our ShoppingCart gRPC services with OAuth 2.0 and JWT tokens. The ShoppingCart Worker will obtain a token before sending requests to the ShoppingCart gRPC server application.
By the end of this article, you will have a practical understanding of how to use gRPC to implement fast, distributed microservices systems.
You will also learn how to secure protected gRPC services with IdentityServer in a microservices architecture.
Background
You can follow the previous article, which explains the overall microservices architecture of this repository.
Prerequisites
• Install the .NET 5 or above SDK
• Install Visual Studio 2019 v16.x or above
Introduction
We will implement e-commerce logic using only gRPC communication. We will have 3 gRPC server applications: the Product, ShoppingCart and Discount gRPC services. And we will have 2 worker services: the Product and ShoppingCart Worker Services. The worker services act as clients and perform operations against the gRPC server applications. Finally, we will secure the gRPC services with a standalone Identity Server microservice using OAuth 2.0 and JWT tokens.
Code Structure
Let’s check our project code structure in the Visual Studio Solution Explorer window. You can see the 4 solution folders; inside those folders you will see the gRPC server and client worker projects that we saw in the overall picture.
If we expand the projects, you will see that:
Under the Product folder, ProductGrpc is a gRPC ASP.NET application which includes the CRUD API operations.
ProductWorkerService is a Worker Service template application which consumes and performs operations over the Product gRPC server application.
In the same way, you can follow the ShoppingCart and Discount folders.
We also have IdentityServer, which is a standalone identity provider for our architecture.
Before we start, we should learn the basic terminology.
What is gRPC ?
gRPC (gRPC Remote Procedure Calls) is an open source remote procedure
call (RPC) system initially developed at Google.
gRPC is a framework to efficiently connect services and build distributed
systems.
It is focused on high performance and uses the HTTP/2 protocol to transport binary messages. It relies on the Protocol Buffers language to define service contracts. Protocol Buffers, also known as Protobuf, allows you to define the interface to be used in service-to-service communication regardless of the programming language.
It generates cross-platform client and server bindings for many languages. The most common usage scenarios include connecting services in a microservices-style architecture and connecting mobile devices and browser clients to backend services.
The gRPC framework allows developers to create services that can
communicate with each other efficiently and independently from their
preferred programming language.
Once you define a contract with Protobuf, this contract can be used by each
service to automatically generate the code that sets up the communication
infrastructure.
This feature simplifies the creation of service interaction and, together with
high performance, makes gRPC the ideal framework for creating
microservices.
How gRPC works ?
In gRPC, a client application can directly call a method on a server application on a different machine as if it were a local object, making it easy for you to build distributed applications and services.
As with many RPC systems, gRPC is based on the idea of defining a service that specifies methods that can be called remotely, with their parameters and return types. On the server side, the server implements this interface and runs a gRPC server to handle client calls. On the client side, the client has a stub that provides the same methods as the server.
gRPC clients and servers can run and talk to each other in a variety of environments, from servers to your own desktop applications, and can be written in any language that gRPC supports. For example, you can easily create a gRPC server in Java or C# with clients in Go, Python or Ruby.
Working with Protocol Buffers
gRPC uses Protocol Buffers by default.
Protocol Buffers are Google’s open source mechanism for serializing
structured data.
When working with protocol buffers, the first step is to define the structure
of the data you want to serialize in a proto file: this is an ordinary text file
with the extension .proto.
The protocol buffer data is structured as messages where each message is a
small logical information record containing a series of name-value pairs
called fields.
Once you’ve determined your data structures, you use the protocol buffer compiler, protoc, to create data access classes in your preferred languages from your protocol definition.
You can find the whole language guide in Google’s official documentation of the protocol buffer language. Here is the link:
https://developers.google.com/protocol-buffers/docs/overview
gRPC Method Types — RPC life cycles
gRPC lets you define four kinds of service method:
Unary RPCs, where the client sends a single request to the server and gets a single response back, just like a normal function call.
Server streaming RPCs where the client sends a request to the server and
gets a stream to read a sequence of messages back. The client reads from the
returned stream until there are no more messages. gRPC guarantees
message ordering within an individual RPC call.
Client streaming RPCs where the client writes a sequence of messages and
sends them to the server, again using a provided stream. Once the client has
finished writing the messages, it waits for the server to read them and return
its response. Again gRPC guarantees message ordering within an individual
RPC call.
Bidirectional streaming RPCs where both sides send a sequence of messages
using a read-write stream. The two streams operate independently, so clients
and servers can read and write in whatever order they like: for example, the
server could wait to receive all the client messages before writing its
responses, or it could alternately read a message then write a message, or
some other combination of reads and writes.
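To make these four kinds concrete, here is a minimal, illustrative proto sketch (the service and message names are examples only, not taken from the project) showing one RPC of each kind:
syntax = "proto3";

service SampleService {
  // Unary: single request, single response
  rpc GetItem (ItemRequest) returns (ItemReply);
  // Server streaming: single request, stream of responses
  rpc ListItems (ItemRequest) returns (stream ItemReply);
  // Client streaming: stream of requests, single response
  rpc UploadItems (stream ItemRequest) returns (ItemReply);
  // Bidirectional streaming: both sides stream independently
  rpc ChatItems (stream ItemRequest) returns (stream ItemReply);
}

message ItemRequest {
  int32 id = 1;
}

message ItemReply {
  string name = 1;
}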
gRPC Development Workflow
So far we have a good definition of gRPC and protobuf files, so now we can summarize the development workflow of gRPC.
gRPC uses a contract-first approach to API development. Services and messages are defined in *.proto files.
Because gRPC uses Protocol Buffers, we start by developing the protobuf file. Protocol Buffers is a way to define the structure of the data you want to serialize.
Once we define the structure of the data in a file with the .proto extension, we use the protoc compiler to generate data access classes in your preferred language(s) from the proto definition.
This generates the data access classes for your application. We use the C# output throughout this article.
Advantages of gRPC
General advantages of gRPC:
• Uses HTTP/2: the characteristics of HTTP/2 alone provide roughly 30–40% more performance; in addition, since gRPC uses binary serialization, it is both faster and needs less bandwidth than JSON serialization
• Higher performance and less bandwidth usage than JSON, thanks to binary serialization
• Supports a wide audience with multi-language / multi-platform support
• Open source, with a powerful community behind it
• Supports bi-directional streaming operations
• Supports SSL / TLS usage
• Supports many authentication methods
gRPC vs REST
gRPC has an advantage over the REST-based APIs that have become popular in recent years. Because of the protobuf format, messages take up less space and therefore communication is faster.
Unlike REST, gRPC works on a contract-file basis, similar to SOAP.
Encoding and decoding of gRPC messages is generated and handled for you on the client and server, so the manual JSON encode / decode work you do for REST APIs is not a concern here.
You also do not need to handle serialization / deserialization for type conversions between different languages, because your data types are clear in the contract and the code for your target language is generated from it.
gRPC usage of Microservices Communication
gRPC is primarily used with backend services.
gRPC is also a good fit for the following scenarios:
• Synchronous backend microservice-to-microservice communication
where an immediate response is required to continue processing.
• Polyglot environments that need to support mixed programming
platforms.
• Low latency and high throughput communication where performance is
critical.
• Point-to-point real-time communication — gRPC can push messages in
real time without polling and has excellent support for bi-directional
streaming.
• Network constrained environments — binary gRPC messages are always
smaller than an equivalent text-based JSON message.
Example of gRPC in Microservices Communication
Imagine we have a Web-Marketing API Gateway that forwards requests to a Shopping Aggregator microservice.
This Shopping Aggregator microservice receives a single request from a client, dispatches it to various microservices, aggregates the results, and sends them back to the requesting client. Such operations typically require synchronous communication in order to produce an immediate response.
In this example, the backend calls from the aggregator are performed using gRPC.
gRPC communication requires both client and server components.
The Shopping Aggregator implements a gRPC client. The client makes synchronous gRPC calls to backend microservices, each of which implements a gRPC server.
The gRPC endpoints must be configured for the HTTP/2 protocol, which is required for gRPC communication.
In the microservices world, most communication uses asynchronous communication patterns, but some operations require direct calls. gRPC should be the primary choice for direct synchronous communication between microservices. Its high-performance communication protocol, based on HTTP/2 and Protocol Buffers, makes it a perfect choice.
gRPC with .NET
gRPC support in .NET is one of the best implementations among the supported languages.
Last year Microsoft contributed a new implementation of gRPC for .NET to the Cloud Native Computing Foundation (CNCF). Built on top of Kestrel and HttpClient, gRPC for .NET makes gRPC a first-class member of the .NET ecosystem.
gRPC is integrated into the .NET Core 3.0 SDK and later.
The SDK includes tooling for endpoint routing, built-in IoC, and logging. The open-source Kestrel web server supports HTTP/2 connections.
There is also a Visual Studio 2019 template that scaffolds a skeleton project for a gRPC service. And .NET Core fully supports Windows, Linux, and macOS.
Both the client and server take advantage of the built-in gRPC generated
code from the .NET Core SDK. Client-side stubs provide the plumbing to
invoke remote gRPC calls. Server-side components provide gRPC plumbing
that custom service classes can inherit and consume.
gRPC performance in .NET 5
gRPC and .NET 5 are really fast.
In a community-run benchmark of different gRPC server implementations, .NET gets the highest requests per second after Rust, just ahead of C++ and Go.
You can see the comparison of performance with other languages in the image.
This result builds on top of the work done in .NET 5. The benchmarks show .NET 5 server performance is 60% faster than .NET Core 3.1, and .NET 5 client performance is 230% faster than .NET Core 3.1.
Performance is a very important feature when it comes to communication in scaling cloud applications, so .NET with gRPC is a very good option for implementing backend microservices that communicate over gRPC.
gRPC performance improvements in .NET 5 | ASP.NET Blog (devblogs.microsoft.com)
HelloWorld gRPC with Asp.Net 5
In this section, we are going to learn how to build gRPC in ASP.NET 5 projects by developing HelloWorld API operations.
We will start by creating an empty web application and develop the gRPC implementation step by step.
We are going to cover the;
• Developing hello.proto Protocol Buffer File (protobuf file) for gRPC
Contract-First API Development
• Implementing gRPC Service Class which Inherits from gRPC generated
service
• Configure gRPC Service with Registering ASP.NET Dependency Injection and Configure with Mapping GrpcService in ASP.NET Middleware
• Run the Application as exposing gRPC Services
• Create GrpcHelloWorldClient Client Application for gRPC Server
• Consume Grpc HelloService API From Client Console Application with
GrpcChannel
• Scaffolding gRPC Server with gRPC Template of Visual Studio
Create Asp.Net Core Empty Web Project For HelloWorld Grpc
First, we are going to create the project.
Create an ASP.NET Core Web Empty Project :
Right-click the solution — Add new Web Empty — HTTPS (HTTPS is required for the HTTP/2 TLS protocol)
— Solution name : GrpcHelloWorld
— Project name : GrpcHelloWorldServer
After that, the first step is adding the Grpc.AspNetCore NuGet package to our project.
Add Nuget Package
Grpc.AspNetCore → this includes the gRPC tooling and related packages.
Now we can continue with the most important part of any gRPC project. Let’s create a “Protos” folder and, under the Protos folder, add a new protobuf file named hello.proto :
Create “Protos” folder
Add new protobuf file
hello.proto
gRPC uses a contract-first approach to API development. Services and
messages are defined in *.proto files.
Developing hello.proto Protocol Buffer File (protobuf file) for gRPC
Contract-First API Development
First, we are going to open the hello.proto file, because gRPC uses a contract-first approach to API development: services and messages are defined in *.proto files.
Let’s develop the hello.proto file; let me write it first and explain the details afterwards.
syntax = "proto3";

option csharp_namespace = "GrpcHelloWorldServer";

package helloworld;

service HelloService {
  rpc SayHello (HelloRequest) returns (HelloResponse);
}

message HelloRequest {
  string name = 1;
}

message HelloResponse {
  string message = 1;
}
OK now, let me explain this proto file.
First, the syntax statement tells the protobuf compiler which syntax we are going to use; it’s important to specify which protobuf version we are using.
The second statement is optional, and it tells the protobuf compiler to generate C# classes within the specified namespace: GrpcHelloWorldServer.
After that, we have defined a HelloService. The HelloService service defines a SayHello call. SayHello sends a HelloRequest message and receives a HelloResponse message.
So we are defining the HelloRequest and HelloResponse message types like data transfer objects (DTOs). These messages and services will be generated so they can be accessed from our application.
After that, we should add this proto file to an item group in our project file so it is attached to the build and the proto C# code is generated.
— So go to the GrpcHelloWorldServer.csproj project file;
Edit the GrpcHelloWorldServer.csproj project file:
Right-click the project and select Edit Project File.
Add an item group with a <Protobuf> element that refers to the hello.proto file:
<ItemGroup>
  <Protobuf Include="Protos\hello.proto" GrpcServices="Server" />
</ItemGroup>
If it does not exist, make sure this entry is added to the project file. It is what causes the C# proto classes to be generated when the application is built.
Now we can build the project. When we build the project, it compiles the proto file and generates the C# proto classes.
Right-click the project and build the project:
it compiles the proto file.
We can also check the properties of the proto file with F4:
Build Action = Protobuf compiler
gRPC Stub Classes = Server only
As you can see, we set “Build Action = Protobuf compiler”, which means that when building and compiling the project, Visual Studio also builds this hello.proto file with the Protobuf compiler.
We also set “gRPC Stub Classes = Server only”, which means that when the C# class is generated, the code is arranged for the server side of a gRPC API project. This way we can implement the server logic on top of the generated, scaffolded C# proto class.
Check the Generated Class
Click “Show All Files” in the Solution Explorer window.
Go to obj — Debug — netcoreapp — Protos — HelloGrpc.cs
See that our class is generated:
public static partial class HelloService
As you can see, the HelloService class was generated by Visual Studio, and it lets us expose the gRPC service in the server role. Now we can use this class and implement our server logic by connecting it to the gRPC API.
Implementing gRPC Service Class which Inherits from gRPC generated
service
In this part, we are going to implement the gRPC service class, which inherits from the gRPC generated service class.
Let’s take action.
First, we are going to create a Service folder in our application.
After that we can:
Add a service class into the Service folder:
HelloWorldService
Inherit from the gRPC generated service class:
public class HelloWorldService : HelloService.HelloServiceBase
As you can see, we inherit our service class from the generated gRPC server base class.
We can use all the features of ASP.NET in this class, such as logging, configuration and other dependency injection, which is what makes gRPC with .NET so powerful.
For example, let’s add a logger object.
public class HelloWorldService : HelloService.HelloServiceBase
{
    private readonly ILogger<HelloWorldService> _logger;

    public HelloWorldService(ILogger<HelloWorldService> logger)
    {
        _logger = logger;
    }
Let me develop our gRPC service method, SayHello. You can implement the method by using the "override" keyword.
public override Task<HelloResponse> SayHello(HelloRequest request, ServerCallContext context)
{
    string resultMessage = $"Hello {request.Name}";

    var response = new HelloResponse
    {
        Message = resultMessage
    };

    return Task.FromResult(response);
}
Since we generated the proto file with server gRPC stub classes, we can implement the actual operation inside our service class by overriding the service method defined in the proto file.
As you can see, in this method we get the HelloRequest message as a parameter and return a HelloResponse message, prefixing the name with "Hello".
So that means we have implemented our gRPC service method in the application.
Configure gRPC Service with Registering ASP.NET Dependency Injection and Configure with Mapping GrpcService in ASP.NET Middleware
We are going to configure the gRPC service in our ASP.NET project. We will apply 2 steps:
1- Configure the gRPC service by registering it with ASP.NET dependency injection — this goes in the Startup.ConfigureServices method
2- Configure it by mapping the GrpcService in the ASP.NET middleware — this goes in the Startup.Configure method.
First, open Startup.cs and locate the ConfigureServices method;
public void ConfigureServices(IServiceCollection services)
{
    services.AddGrpc();
}
We added the AddGrpc extension method; this injects the gRPC-related services into our application.
After that, we should expose the gRPC API protocol from our application. To do that, we map the gRPC service onto our endpoints.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGrpcService<HelloWorldService>();
    });
}
As you can see, we have configured our application as a gRPC server, so now we are ready to run the application.
Run the application
Change the run profile to “GRPCHelloWorld” and run the application.
See that the HTTPS port 5001 is used — https://localhost:5001/
See the Logs
info: Microsoft.Hosting.Lifetime[0]
Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: C:\Users\ezozkme\source\repos\grpc-
examples\section1\GrpcHelloWorld\GrpcHelloWorldServer
As you can see, port 5001 is listening for gRPC calls. Now we can continue with developing the client application.
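As a quick sketch of such a client (an illustrative console application; it assumes the same hello.proto is added to the client project with GrpcServices="Client", and that the project references the Grpc.Net.Client, Google.Protobuf and Grpc.Tools packages), the generated HelloServiceClient can be consumed over a GrpcChannel like this:
// Program.cs of a hypothetical GrpcHelloWorldClient console application
using System;
using System.Threading.Tasks;
using Grpc.Net.Client;
using GrpcHelloWorldServer; // namespace generated from hello.proto (csharp_namespace)

class Program
{
    static async Task Main(string[] args)
    {
        // Create a channel to the gRPC server (HTTP/2 over TLS on port 5001)
        using var channel = GrpcChannel.ForAddress("https://localhost:5001");

        // HelloServiceClient is the stub generated from hello.proto
        var client = new HelloService.HelloServiceClient(channel);

        // Unary call: send a HelloRequest and get a HelloResponse back
        var response = await client.SayHelloAsync(new HelloRequest { Name = "AspnetRun" });

        Console.WriteLine(response.Message);
    }
}
With the server running, this console application should print the greeting returned by SayHello.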
Building Product Grpc Microservices for Exposing Product
CRUD APIs
In this section, we are going to perform CRUD operations in the Product gRPC microservice. We will expose APIs over the HTTP/2 gRPC protocol for the Product microservice. After that, we will consume these gRPC service methods from a client application, so we will also develop that client application. This ProductGrpc microservice is the first part of our big picture; we will extend the microservices with the gRPC protocol in order to provide inter-service communication between them.
Big Picture
Let’s check out our big picture of the architecture of what we are going to
build one by one.
In this section, as you can see the selected box, we are going to build Product
Grpc Microservices for Exposing Product CRUD APIs with gRPC.
Let me give some brief information, We are going to;
• Create Product Grpc Microservices Project in Grpc Microservices
Solution
• Set Product Grpc Microservices Database with Entity Framework In-Memory Database in Code-First Approach
• Seeding In-Memory Database with Entity Framework Core for
ProductGrpc Microservices
• Developing product.proto ProtoBuf file for Exposing Crud Services in
Product Grpc Microservices
• Generate Proto Service Class from Product proto File in Product Grpc
Microservices
• Developing ProductService class to Implement Grpc Proto Service
Methods in Product Grpc Microservices
• Create Client Console Application for Consuming Product Grpc
Microservices
• Consume GetProductAsync Product Grpc Server Method from Client
Console Application
Let’s take an action.
Create Product Grpc Microservices Project in Grpc Microservices Solution
We are going to create the Product Grpc Microservices project in the Grpc Microservices Visual Studio solution.
First of all, we are going to create a new solution and project. This is the first step in the development of our big picture:
Create an ASP.NET gRPC Template Project : — HTTPS (required for the HTTP/2 TLS protocol)
— Solution name : GrpcMicroservices
— Project name : ProductGrpc
Let’s quickly check the files:
This project is generated by the default gRPC project template.
We can check the files:
• Nuget Packages
• Protos Folder
• Services Folder
• Startup.cs
• appsettings.json
• launchSettings
You can also see the run profile of the project; the template creates a custom run profile on the HTTPS 5001 port.
So, as you can see, the gRPC template has handled all of this for us, and we can proceed with the database development of the ProductGrpc microservice.
Developing product.proto ProtoBuf file for Exposing Crud Services in
Product Grpc Microservices
We are going to develop the “product.proto” ProtoBuf file for exposing CRUD services in the Product Grpc microservice.
Let’s take action.
First of all, we start by deleting the default greet.proto file and GreeterService. Of course, we should also remove them from the Startup endpoint middleware, where we will later map our new service instead:
Delete greet.proto
Delete GreeterService
Update the endpoint mapping in Startup:
endpoints.MapGrpcService<ProductService>();
Now we can create “product.proto” file under the Protos folder.
Develop proto file
product.proto file;
syntax = "proto3";

option csharp_namespace = "ProductGrpc.Protos";

import "google/protobuf/timestamp.proto";
import "google/protobuf/empty.proto";

service ProductProtoService {
  rpc GetProduct (GetProductRequest) returns (ProductModel);
  rpc GetAllProducts (GetAllProductsRequest) returns (stream ProductModel);
  rpc AddProduct (AddProductRequest) returns (ProductModel);
  rpc UpdateProduct (UpdateProductRequest) returns (ProductModel);
  rpc DeleteProduct (DeleteProductRequest) returns (DeleteProductResponse);
  rpc InsertBulkProduct (stream ProductModel) returns (InsertBulkProductResponse);
  rpc Test (google.protobuf.Empty) returns (google.protobuf.Empty);
}

message GetProductRequest {
  int32 productId = 1;
}

message GetAllProductsRequest {
}

message AddProductRequest {
  ProductModel product = 1;
}

message UpdateProductRequest {
  ProductModel product = 1;
}

message DeleteProductRequest {
  int32 productId = 1;
}

message DeleteProductResponse {
  bool success = 1;
}

message InsertBulkProductResponse {
  bool success = 1;
  int32 insertCount = 2;
}

message ProductModel {
  int32 productId = 1;
  string name = 2;
  string description = 3;
  float price = 4;
  ProductStatus status = 5;
  google.protobuf.Timestamp createdTime = 6;
}

enum ProductStatus {
  INSTOCK = 0;
  LOW = 1;
  NONE = 2;
}
OK now, let me explain this proto file.
First, the syntax statement tells the protobuf compiler which syntax we are going to use; it’s important to specify which protobuf version we are using.
The second statement is optional, and it tells the protobuf compiler to generate C# classes within the specified namespace: ProductGrpc.Protos.
After that, we have defined a ProductProtoService.
The ProductProtoService has the CRUD methods which will become the gRPC services. Along with this, it has the message types.
We have defined a generic model with the ProductModel message type and use this type as the response object.
For example, for the rpc GetProduct method we used the GetProductRequest message as the request and ProductModel as the response:
rpc GetProduct (GetProductRequest) returns (ProductModel);
In the same way we have defined the Add/Update/Delete methods.
We have 1 server-streaming and 1 client-streaming method, GetAllProducts and InsertBulkProduct:
• rpc GetAllProducts (GetAllProductsRequest) returns (stream ProductModel);
• rpc InsertBulkProduct (stream ProductModel) returns (InsertBulkProductResponse);
As you can see, we put the stream keyword on the server or client part of the message depending on the direction. We will see the implementation of these methods later.
We also have an enum type, ProductStatus; enums are supported in proto files and are also generated in the consuming classes.
So we are defining the ProductProtoService, and these messages and services will be generated so they can be accessed from our application. As you can see, we have developed our contract-based product.proto protobuf file. Now we can generate the server code from this file.
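As with hello.proto earlier, the proto file must be referenced from the project file so the server stubs are generated at build time. A minimal sketch of that item group (the gRPC template or Visual Studio may already have added an equivalent entry when the file was created):
<ItemGroup>
  <Protobuf Include="Protos\product.proto" GrpcServices="Server" />
</ItemGroup>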
Developing ProductService class to Implement Grpc Proto Service
Methods in Product Grpc Microservices
We are going to develop the ProductService class to implement the gRPC proto service methods in the Product Grpc microservice.
Let’s take action.
First of all, we should create a new class:
— Class Name : ProductService.cs
After that, we should inherit from the ProductProtoServiceBase class which was generated from the proto file:
public class ProductService : ProductProtoService.ProductProtoServiceBase
OK, now we can start implementing the main methods of the product.proto contract file in the ProductGrpc microservice.
— Let me develop;
— GetProduct method;
public override async Task<ProductModel> GetProduct(GetProductRequest request, ServerCallContext context)
{
    var product = await _productDbContext.Product.FindAsync(request.ProductId);
    if (product == null)
    {
        throw new RpcException(new Status(StatusCode.NotFound, $"Product with ID={request.ProductId} is not found."));
    }

    var productModel = _mapper.Map<ProductModel>(product);
    return productModel;
}
In this method, we use _productDbContext to get the product data from the database, and return a ProductModel object, which is our proto message type class.
You can also see that we have to convert DateTime values to Timestamp. Because Timestamp is the Google well-known type for date-time values, we have to convert our DateTime object to a Timestamp by using the FromDateTime method:
CreatedTime = Timestamp.FromDateTime(product.CreateTime)
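Since the method maps the entity to the proto model with _mapper, this Timestamp conversion fits naturally into an AutoMapper profile. A minimal sketch, assuming a Product EF Core entity with a CreateTime property (the entity and property names here are illustrative); note that Timestamp.FromDateTime requires a UTC DateTime:
using System;
using AutoMapper;
using Google.Protobuf.WellKnownTypes;
using ProductGrpc.Protos;

public class ProductProfile : Profile
{
    public ProductProfile()
    {
        // Product is the assumed EF Core entity; ProductModel is generated from product.proto.
        CreateMap<Product, ProductModel>()
            .ForMember(dest => dest.CreatedTime,
                opt => opt.MapFrom(src =>
                    Timestamp.FromDateTime(DateTime.SpecifyKind(src.CreateTime, DateTimeKind.Utc))));
    }
}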
Finally, we should not forget to map our ProductService class as a gRPC service in the Startup class.
We are going to register ProductService into the ASP.NET pipeline as a new gRPC service.
Go to Startup.cs — Configure
app.UseEndpoints(endpoints =>
{
    endpoints.MapGrpcService<ProductService>();
});
I am not going to develop all the details here; this is the ASP.NET Core gRPC project which implements the CRUD operations for the Product entity with an Entity Framework in-memory database.
You can check the development of this project at the GitHub link of the project:
aspnetrun/run-aspnet-grpc (github.com)
Building Product WorkerService to Generate and Insert Products into the ProductGrpc Microservice
In this section, we are going to build the Product WorkerService to generate and insert products into the ProductGrpc microservice.
This application will generate products and add them to the Product database by consuming the Product gRPC services.
Big Picture
Let’s check out our big picture of the architecture of what we are going to
build one by one.
In this section, as you can see in the selected box, we are going to build the Product WorkerService to generate and insert products into the ProductGrpc microservice.
Let me give some brief information, we are going to;
• Create Product Worker Service Project in Grpc Microservices Solution
• Add Connected Service Proto to Product Worker Service Project for
Consuming ProductGrpc Microservice
• Set Configuration with appsettings.json file into Product Worker Service
Project
• Consume Product Grpc Server Method From Product Worker Client
Application
• Focus on Big Picture and Product Worker Add Products to Product Grpc
Server
• Generate Products with ProductFactory class in Product Worker Service
Application
• Logging in Product Worker Service Client Application and Product Grpc
Server Application
Let’s take an action.
Create Product Worker Service Project in Grpc Microservices Solution
We are going to create the Product Worker Service project in the Grpc Microservices Visual Studio solution.
The worker service will run on a time interval and generate load against the ProductGrpc microservice.
First of all, we are going to create a new project, following our big picture of the Product microservices.
Create a new “Worker Service” template project:
— Solution name : GrpcMicroservices
— Project name : ProductWorkerService
Let me explain the worker project.
This project type is a typical Microsoft service project which runs work in the background.
It is fully integrated with the .NET environment: the BackgroundService base class, built-in dependency injection, configuration, logging and so on.
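For reference, this is roughly what the template’s Program.cs looks like: a minimal sketch of how the Worker class is registered as a hosted background service.
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        CreateHostBuilder(args).Build().Run();

    // The generic host wires up configuration, logging and DI for the worker.
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices((hostContext, services) =>
            {
                // Worker derives from BackgroundService and runs ExecuteAsync in a loop.
                services.AddHostedService<Worker>();
            });
}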
Consume Product Grpc Server Method From Product Worker Client
Application
We are going to consume the Product gRPC server methods from the Product Worker client application.
Let’s take action.
First of all, we are going to open the Worker.cs — ExecuteAsync method of the Worker application:
Go to Worker.cs — ExecuteAsync method
Develop the consume method in here;
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    while (!stoppingToken.IsCancellationRequested)
    {
        _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
        try
        {
            using var channel = GrpcChannel.ForAddress(_config.GetValue<string>("WorkerService:ServerUrl"));
            var client = new ProductProtoService.ProductProtoServiceClient(channel);

            _logger.LogInformation("AddProductAsync started..");
            var addProductResponse = await client.AddProductAsync(await _factory.Generate());
            _logger.LogInformation("AddProduct Response: {product}", addProductResponse.ToString());
        }
        catch (Exception exception)
        {
            _logger.LogError(exception.Message);
            throw;
        }

        await Task.Delay(_config.GetValue<int>("WorkerService:TaskInterval"), stoppingToken);
    }
}
In this code, first of all we get the server URL from configuration and create the client object using the proto-generated classes.
After that, we call the client’s AddProductAsync method with a generated product in order to add it to the Product database.
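The configuration keys read above come from the worker’s appsettings.json. A minimal sketch of that file, assuming the ProductGrpc server listens on https://localhost:5001 (the interval value, in milliseconds, is only an example):
{
  "WorkerService": {
    "ServerUrl": "https://localhost:5001",
    "TaskInterval": 2000
  }
}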
Building Shopping Cart Grpc Server Application for Storing
Products into Cart
In this section, we are going to build the Shopping Cart gRPC server application for storing products in a cart.
This application will be the gRPC server application for the shopping cart and its items.
Big Picture
Let’s check out our big picture of the architecture of what we are going to
build one by one.
In this section, as you can see in the selected box, we are going to build the Shopping Cart gRPC server application for storing products in a cart.
Let me give some brief information, We are going to;
• Create Shopping Cart Grpc Microservices Project in Grpc Microservices
Solution
• Set Shopping Cart Grpc Microservices Database with Entity Framework
In-Memory Database in Code-First Approach
• Seeding In-Memory Database with Entity Framework Core for
ShoppingCartGrpc Microservices
• Developing ShoppingCart.proto ProtoBuf file for Exposing Crud Services in ShoppingCart Grpc Microservices
• Generate Proto Service Class from ShoppingCart proto File in
ShoppingCart Grpc Microservices
• Developing ShoppingCartService class to Implement Grpc Proto Service
Methods in ShoppingCart Grpc Microservices
• Implementing AutoMapper into ShoppingCartService Class of
ShoppingCart Grpc Microservices
• Developing AddItemIntoShoppingCart Client Stream Server Method in
the ShoppingCartService class
Let’s take an action.
Create Shopping Cart Grpc Microservices Project in Grpc Microservices
Solution
We are going to create the Shopping Cart Grpc Microservices project in the Grpc Microservices Visual Studio solution.
We are going to create a new project for the Shopping Cart gRPC server application:
Create an ASP.NET gRPC Template Project : — HTTPS (required for the HTTP/2 TLS protocol)
— Solution name : GrpcMicroservices
— Project name : ShoppingCartGrpc
Developing ShoppingCart.proto ProtoBuf file for Exposing Crud Services in ShoppingCart Grpc Microservices
We are going to develop the “ShoppingCart.proto” ProtoBuf file for exposing CRUD services in the ShoppingCart Grpc microservice.
Now we can create the “ShoppingCart.proto” file under the Protos folder.
Develop the proto file
ShoppingCart.proto file;
syntax = "proto3";

option csharp_namespace = "ShoppingCartGrpc.Protos";

service ShoppingCartProtoService {
  rpc GetShoppingCart (GetShoppingCartRequest) returns (ShoppingCartModel);
  rpc CreateShoppingCart (ShoppingCartModel) returns (ShoppingCartModel);
  rpc AddItemIntoShoppingCart (stream AddItemIntoShoppingCartRequest) returns (AddItemIntoShoppingCartResponse);
  rpc RemoveItemIntoShoppingCart (RemoveItemIntoShoppingCartRequest) returns (RemoveItemIntoShoppingCartResponse);
}

message GetShoppingCartRequest {
  string username = 1;
}

message AddItemIntoShoppingCartRequest {
  string username = 1;
  string discountCode = 2;
  ShoppingCartItemModel newCartItem = 3;
}

message AddItemIntoShoppingCartResponse {
  bool success = 1;
  int32 insertCount = 2;
}

message RemoveItemIntoShoppingCartRequest {
  string username = 1;
  ShoppingCartItemModel removeCartItem = 2;
}

message RemoveItemIntoShoppingCartResponse {
  bool success = 1;
}

message ShoppingCartModel {
  string username = 1;
  repeated ShoppingCartItemModel cartItems = 2;
}

message ShoppingCartItemModel {
  int32 quantity = 1;
  string color = 2;
  float price = 3;
  int32 productId = 4;
  string productname = 5;
}
First, the syntax statement tells the protobuf compiler which syntax we are going to use; it’s important to specify which protobuf version we are using.
The second statement is optional, and it tells the protobuf compiler to generate C# classes within the specified namespace: ShoppingCartGrpc.Protos.
After that, we have defined a ShoppingCartProtoService.
The ShoppingCartProtoService has get, create and add-item-into-cart methods which will become the gRPC services. Along with this, it has the message types.
We have defined generic models with the ShoppingCartModel and ShoppingCartItemModel message types and use these types as request and response objects.
For example, for the rpc CreateShoppingCart method we used the ShoppingCartModel message as both the request and the response:
rpc CreateShoppingCart (ShoppingCartModel) returns (ShoppingCartModel);
In the same way we have defined the Add/Remove ItemIntoShoppingCart methods.
We have 1 client-streaming method, AddItemIntoShoppingCart, which will be used when adding items into the shopping cart:
rpc AddItemIntoShoppingCart (stream AddItemIntoShoppingCartRequest) returns (AddItemIntoShoppingCartResponse);
As you can see, we put the stream keyword on the request part of the message. We will see the implementation of this method later.
So we are defining the ShoppingCartProtoService, and these messages and services will be generated so they can be accessed from our application.
Developing ShoppingCartService class to Implement Grpc Proto Service
Methods in ShoppingCart Grpc Microservices
We are going to develop the ShoppingCartService class to implement the gRPC proto service methods in the ShoppingCart Grpc microservice.
First of all, we should create a new class under the “Services” folder:
— Class Name : ShoppingCartService.cs
Now we are ready to override the proto methods:
— write “override” and see the overridable methods.
public override async Task<ShoppingCartModel> GetShoppingCart(GetShoppingCartRequest request, ServerCallContext context)
{
    var shoppingCart = await _shoppingCartDbContext.ShoppingCart.FirstOrDefaultAsync(s => s.UserName == request.Username);
    if (shoppingCart == null)
    {
        throw new RpcException(new Status(StatusCode.NotFound, $"ShoppingCart with UserName={request.Username} is not found."));
    }

    var shoppingCartModel = _mapper.Map<ShoppingCartModel>(shoppingCart);
    return shoppingCartModel;
}
In this method, we use _shoppingCartDbContext to get the shopping cart data from the database, and return a ShoppingCartModel object, which is our proto message type class.
To create the ShoppingCartModel, it is convenient to use AutoMapper.
CreateShoppingCart method;
public override async Task<ShoppingCartModel> CreateShoppingCart(ShoppingCartModel request, ServerCallContext context)
{
    var shoppingCart = _mapper.Map<ShoppingCart>(request);

    var isExist = await _shoppingCartDbContext.ShoppingCart.AnyAsync(s => s.UserName == shoppingCart.UserName);
    if (isExist)
    {
        _logger.LogError("Invalid UserName for ShoppingCart creation. UserName : {userName}", shoppingCart.UserName);
        throw new RpcException(new Status(StatusCode.NotFound, $"ShoppingCart with UserName={request.Username} already exists."));
    }

    _shoppingCartDbContext.ShoppingCart.Add(shoppingCart);
    await _shoppingCartDbContext.SaveChangesAsync();

    _logger.LogInformation("ShoppingCart is successfully created. UserName : {userName}", shoppingCart.UserName);

    var shoppingCartModel = _mapper.Map<ShoppingCartModel>(shoppingCart);
    return shoppingCartModel;
}
In this method, we use _shoppingCartDbContext to check the shopping cart data in the database, and return a ShoppingCartModel object, which is our proto message type class.
We check whether a ShoppingCart with the given username already exists; if it does, we return an error. If it does not exist, we create a new shopping cart with no items and return it as the model class.
Developing AddItemIntoShoppingCart Client Stream Server Method in the
ShoppingCartService class
We are going to develop the AddItemIntoShoppingCart client-streaming server method in the ShoppingCartService class.
This is a client-streaming method: the data comes from the ShoppingCart Worker client application as a stream, and this data is inserted as shopping cart items in the ShoppingCart Grpc microservice.
Now we are ready to override the proto method.
OK, now we can start implementing our client-streaming server method, AddItemIntoShoppingCart.
— Let me develop;
Go to ShoppingCartService.cs
— AddItemIntoShoppingCart method;
[AllowAnonymous]
public override async Task<AddItemIntoShoppingCartResponse> AddItemIntoShoppingCart(IAsyncStreamReader<AddItemIntoShoppingCartRequest> requestStream, ServerCallContext context)
{
    while (await requestStream.MoveNext())
    {
        // Get sc if exist or not
        // Check item if exist in sc or not
        // if item exist +1 quantity
        // if not exist add new item into sc
        // check discount and set the item price

        var shoppingCart = await _shoppingCartDbContext.ShoppingCart.FirstOrDefaultAsync(s => s.UserName == requestStream.Current.Username);
        if (shoppingCart == null)
        {
            throw new RpcException(new Status(StatusCode.NotFound, $"ShoppingCart with UserName={requestStream.Current.Username} is not found."));
        }

        var newAddedCartItem = _mapper.Map<ShoppingCartItem>(requestStream.Current.NewCartItem);

        var cartItem = shoppingCart.Items.FirstOrDefault(i => i.ProductId == newAddedCartItem.ProductId);
        if (null != cartItem)
        {
            cartItem.Quantity++;
        }
        else
        {
            // GRPC CALL DISCOUNT SERVICE — check discount and set the item price
            var discount = await _discountService.GetDiscount(requestStream.Current.DiscountCode);
            newAddedCartItem.Price -= discount.Amount;

            shoppingCart.Items.Add(newAddedCartItem);
        }
    }

    var insertCount = await _shoppingCartDbContext.SaveChangesAsync();

    var response = new AddItemIntoShoppingCartResponse
    {
        Success = insertCount > 0,
        InsertCount = insertCount
    };

    return response;
}
Basically, in this code, we read the request stream of shopping cart items and add them to the EF Core _shoppingCartDbContext shopping cart collection until the stream is finished, iterating over the IAsyncStreamReader object.
— Inside the while loop, for each stream item, we apply the logic we outlined:
// Get sc if exist or not
// Check item if exist in sc or not
// if item exist +1 quantity
// if not exist add new item into sc
// check discount and set the item price
We use _shoppingCartDbContext to get the shopping cart data from the database.
We check the ShoppingCart by username; if it does not exist, we return an error.
Then we check the cart item by productId: if the product already exists in the cart, we increase the quantity; if it does not exist, we get the discount value by consuming the Discount service over gRPC, apply it to the price, and add the new item into the cart.
So we should also develop the DiscountGrpc project as the server that owns Discounts. We communicate with the Discount service over a synchronous gRPC call. I am not going to develop it here, but you can check the code of the DiscountGrpc project; it is similar to the ProductGrpc service project. The idea is that it owns the Discount data and exposes gRPC APIs so the Discount data can be accessed from the ShoppingCartGrpc project.
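For reference, here is a minimal sketch of what the Discount contract could look like, inferred from how the ShoppingCart service calls it (GetDiscount by discount code, reading an Amount from the result); the actual discount.proto in the repository may differ in names and fields:
syntax = "proto3";

option csharp_namespace = "DiscountGrpc.Protos";

service DiscountProtoService {
  // Unary call: look up a discount by its code
  rpc GetDiscount (GetDiscountRequest) returns (DiscountModel);
}

message GetDiscountRequest {
  string discountCode = 1;
}

message DiscountModel {
  int32 discountId = 1;
  string code = 2;
  int32 amount = 3;
}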
Building ShoppingCart WorkerService to Retrieve Products and Add Them to the ShoppingCart by Consuming the ProductGrpc and ShoppingCartGrpc Microservices
We are going to build the ShoppingCart WorkerService to retrieve products and add them to the shopping cart by consuming the ProductGrpc and ShoppingCartGrpc microservices.
This application contains the main workflow and consumes both the Product and ShoppingCart services in order to perform our business logic.
Big Picture
Let’s check out our big picture of the architecture of what we are going to
build one by one.
In this section, as you can see in the selected box, we are going to build the ShoppingCart WorkerService to retrieve products and add them to the shopping cart by consuming the ProductGrpc and ShoppingCartGrpc microservices.
Let me give some brief information, We are going to;
• Create ShoppingCart Worker Service Project in Grpc Microservices
Solution
• Add Connected Services Proto to ShoppingCart Worker Service Project
for Consuming ProductGrpc and ShoppingCartGrpc Microservices
• Set Configuration with appsettings.json file into ShoppingCart Worker
Service Project
• Consume Product and ShoppingCart Grpc Server Method From
ShoppingCart Worker Client Application
• Focus on Big Picture — ShoppingCart Worker — Get Products with Server
Stream and Add Items to Shopping Cart with Client Stream
• Running All Grpc Server Microservices with Product and ShoppingCart
Worker Service
Let’s take an action.
Consume Product and ShoppingCart Grpc Server Method From
ShoppingCart Worker Client Application
We are going to consume the Product and ShoppingCart gRPC server methods from the ShoppingCart Worker client application.
First of all, we are going to open the Worker.cs — ExecuteAsync method of the Worker application:
Go to Worker.cs — ExecuteAsync method
Develop the consume method in here;
//Create SC if not exist
//Retrieve products from product grpc with server stream
//Add sc items into SC with client stream
Worker.cs — ExecuteAsync
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    while (!stoppingToken.IsCancellationRequested)
    {
        _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);

        //0 Get Token from IS4
        //1 Create SC if not exist
        //2 Retrieve products from product grpc with server stream
        //3 Add sc items into SC with client stream

        //0 Get Token from IS4
        var token = await GetTokenFromIS4();

        //1 Create SC if not exist
        using var scChannel = GrpcChannel.ForAddress(_config.GetValue<string>("WorkerService:ShoppingCartServerUrl"));
        var scClient = new ShoppingCartProtoService.ShoppingCartProtoServiceClient(scChannel);
        var scModel = await GetOrCreateShoppingCartAsync(scClient, token);

        // open sc client stream
        using var scClientStream = scClient.AddItemIntoShoppingCart();

        //2 Retrieve products from product grpc with server stream
        using var productChannel = GrpcChannel.ForAddress(_config.GetValue<string>("WorkerService:ProductServerUrl"));
        var productClient = new ProductProtoService.ProductProtoServiceClient(productChannel);

        _logger.LogInformation("GetAllProducts started..");
        using var clientData = productClient.GetAllProducts(new GetAllProductsRequest());
        await foreach (var responseData in clientData.ResponseStream.ReadAllAsync())
        {
            _logger.LogInformation("GetAllProducts Stream Response: {responseData}", responseData);

            //3 Add sc items into SC with client stream
            var addNewScItem = new AddItemIntoShoppingCartRequest
            {
                Username = _config.GetValue<string>("WorkerService:UserName"),
                DiscountCode = "CODE_100",
                NewCartItem = new ShoppingCartItemModel
                {
                    ProductId = responseData.ProductId,
                    Productname = responseData.Name,
                    Price = responseData.Price,
                    Color = "Black",
                    Quantity = 1
                }
            };

            await scClientStream.RequestStream.WriteAsync(addNewScItem);
            _logger.LogInformation("ShoppingCart Client Stream Added New Item : {addNewScItem}", addNewScItem);
        }

        await scClientStream.RequestStream.CompleteAsync();

        var addItemIntoShoppingCartResponse = await scClientStream;
        _logger.LogInformation("AddItemIntoShoppingCart Client Stream Response: {addItemIntoShoppingCartResponse}", addItemIntoShoppingCartResponse);

        await Task.Delay(_config.GetValue<int>("WorkerService:TaskInterval"), stoppingToken);
    }
}
In this code, first of all we get or create the shopping cart with a ShoppingCart gRPC call, so that items can then be added or removed.
After that, we retrieve the products from the Product gRPC service with a server stream.
Before reading the products, we open the shopping cart client stream so it is ready for client streaming.
And while reading the product stream, we add each product as a shopping cart item into the cart via the client stream.
Of course, we log every step in this method.
As you can see, we have developed our ShoppingCart worker service application consuming both the Product and ShoppingCart gRPC server applications.
Authenticate gRPC Services with IdentityServer4 Protect
ShoppingCartGrpc Method with OAuth 2.0 and JWT Bearer
Token
We are going to authenticate the gRPC services with IdentityServer4 and protect the ShoppingCartGrpc methods with OAuth 2.0 and a JWT bearer token.
This application will be the security layer of our gRPC server applications.
Big Picture
Let’s check out our big picture of the architecture of what we are going to
build one by one.
In this section, as you can see in the selected box, we are going to authenticate the gRPC services with IdentityServer4 and protect the ShoppingCartGrpc methods with OAuth 2.0 and a JWT bearer token.
Let me give some brief information, We are going to;
• Building IdentityServer4 Authentication Microservices for Securing
ShoppingCartGrpc Server Application
• Configure IdentityServer4 with Adding Config Class for Clients,
Resources, Scopes and TestUsers
• Securing ShoppingCart Grpc Services with IdentityServer4 OAuth 2.0 and
JWT Bearer Token
• Testing to Access ShoppingCart Grpc Services without Token
• Get Token From IS4 and Make Grpc Call to ShoppingCart Grpc Services
with JWT Token Header from ShoppingCart Worker Service
• Set Token to Grpc Header when Call to ShoppingCart Grpc Services
• Run Entire Applications and See the Big Picture in Your Local
Let’s take an action.
Building and Configuring IdentityServer4 Authentication Microservices
for Securing ShoppingCartGrpc Server Application
We are going to build a standalone IdentityServer4 authentication microservice for securing the ShoppingCartGrpc server application.
This will be the authentication server that generates and validates the JWT tokens.
First of all, we are going to create a new project, following our big picture of the IdentityServer microservice.
Create a new folder — “Authentication”
Create an ASP.NET Core Empty Project :
Right Click — Add new Empty Web App — HTTPS — name : IdentityServer
Right-click — Manage NuGet Packages :
Browse IdentityServer
Install IdentityServer4
Go to Startup.cs and ConfigureServices
Register into DI:
public void ConfigureServices(IServiceCollection services)
{
    services.AddIdentityServer();
}
Add to the pipeline:
app.UseRouting();
app.UseIdentityServer();
Configure IdentityServer4 with Adding Config Class for Clients,
Resources, Scopes and TestUsers
We are going to add a Config class with the Clients, Resources, Scopes and TestUsers definitions for IdentityServer4.
First of all, we create a new Config class. This class consists of the different configurations related to Users, Clients, IdentityResources, etc. So let’s add them one by one.
public class Config
{
    public static IEnumerable<Client> Clients =>
        new Client[]
        {
            new Client
            {
                ClientId = "ShoppingCartClient",
                AllowedGrantTypes = GrantTypes.ClientCredentials,
                ClientSecrets =
                {
                    new Secret("secret".Sha256())
                },
                AllowedScopes = { "ShoppingCartAPI" }
            }
        };

    public static IEnumerable<ApiScope> ApiScopes =>
        new ApiScope[]
        {
            new ApiScope("ShoppingCartAPI", "Shopping Cart API")
        };

    public static IEnumerable<ApiResource> ApiResources =>
        new ApiResource[]
        {
        };

    public static IEnumerable<IdentityResource> IdentityResources =>
        new IdentityResource[]
        {
        };

    public static List<TestUser> TestUsers =>
        new List<TestUser>
        {
        };
}
And finally, we should configure IdentityServer according to our protected resource; that means we register the definitions above with the IdentityServer configuration.
Configuring IdentityServer
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddIdentityServer()
        .AddInMemoryClients(Config.Clients)
        .AddInMemoryApiScopes(Config.ApiScopes)
        .AddDeveloperSigningCredential();
}
Securing ShoppingCart Grpc Services with IdentityServer4 OAuth 2.0 and
JWT Bearer Token
We are going to secure the ShoppingCart gRPC services with IdentityServer4, OAuth 2.0 and a JWT bearer token.
First of all, we are going to locate the “ShoppingCartGrpc” project again.
Go to “ShoppingCartGrpc”
Add a NuGet dependency to ShoppingCartGrpc:
package Microsoft.AspNetCore.Authentication.JwtBearer
After that, in order to activate JWT bearer token authentication, we should register JWT authentication with the ASP.NET Core dependency injection container.
In the Startup.cs — ConfigureServices method, we register and configure JWT authentication by adding it to the service collection.
Add the authentication services to DI (dependency injection):
public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication("Bearer")
        .AddJwtBearer("Bearer", options =>
        {
            options.Authority = "https://localhost:5005";
            options.TokenValidationParameters = new TokenValidationParameters
            {
                ValidateAudience = false
            };
        });

    services.AddAuthorization();
}
We use the AddAuthentication method to add authentication services to the dependency injection container of our project. Moreover, we use the AddJwtBearer method to configure support for our authorization server.
In that method, we specify:
Authority — the address of our IdentityServer, used when sending OpenID Connect calls
TokenValidationParameters — ValidateAudience = false means that although received OpenID Connect tokens carry an audience value, we do not require it to be validated.
And finally, we should authorize the gRPC service class, ShoppingCartService, in the ShoppingCartGrpc project. This way we protect our API resources:
[Authorize]
public class ShoppingCartService : ShoppingCartProtoService.ShoppingCartProtoServiceBase
— So we have finished protecting the ShoppingCartGrpc services with IS4, by requiring and validating a JWT token.
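One detail worth double-checking (a minimal sketch based on standard ASP.NET Core behavior, not on code shown in this article): for the [Authorize] attribute to take effect, the authentication and authorization middleware must also sit in the pipeline of ShoppingCartGrpc, between UseRouting and UseEndpoints:
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();

    // Authentication must run before authorization so the JWT bearer token is validated first.
    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGrpcService<ShoppingCartService>();
    });
}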
Get Token From IS4 and Make Grpc Call to ShoppingCart Grpc Services
with JWT Token Header from ShoppingCart Worker Service
We are going to get a token from IS4 and make a gRPC call to the ShoppingCart gRPC services with a JWT token header from the ShoppingCart Worker Service.
So we should go to the ShoppingCart Worker Service.
First of all, we are going to locate the “ShoppingCartWorkerService” project again.
Go to “ShoppingCartWorkerService”
Add a NuGet dependency to ShoppingCartWorkerService:
Add the IdentityModel NuGet package to the client.
package IdentityModel
Add the new step into our logic and add a new method for getting the token:
Go to Worker.cs — ExecuteAsync
++ //0 Get Token from IS4
//1 Create SC if not exist
//2 Retrieve products from product grpc with server stream
//3 Add sc items into SC with client stream
//0 Get Token from IS4
var token = await GetTokenFromIS4();
Let me develop the get token method;
private async Task<string> GetTokenFromIS4()
{
    // discover endpoints from metadata
    var client = new HttpClient();
    var disco = await client.GetDiscoveryDocumentAsync("https://localhost:5005");
    if (disco.IsError)
    {
        Console.WriteLine(disco.Error);
        return string.Empty;
    }

    // request token
    var tokenResponse = await client.RequestClientCredentialsTokenAsync(new ClientCredentialsTokenRequest
    {
        Address = disco.TokenEndpoint,
        ClientId = "ShoppingCartClient",
        ClientSecret = "secret",
        Scope = "ShoppingCartAPI"
    });

    if (tokenResponse.IsError)
    {
        Console.WriteLine(tokenResponse.Error);
        return string.Empty;
    }

    return tokenResponse.AccessToken;
}
In this method, first of all, we discover the endpoints from metadata by passing the IS4 server URL to the GetDiscoveryDocumentAsync method. This method comes from the IdentityModel package.
After that, we request the token from the client object by calling RequestClientCredentialsTokenAsync with a ClientCredentialsTokenRequest. These request parameters should be the same as in the IdentityServer Config class, because the token request is evaluated against those definitions. You can see them in IdentityServer — Config.cs.
Finally, it returns a tokenResponse, and we can access the token through tokenResponse.AccessToken.
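The outline also mentions setting the token on the gRPC header when calling the ShoppingCart gRPC services. A minimal sketch of how the worker could attach it (the exact helper used in the repository may differ; passing a Metadata header is the standard Grpc.Core approach):
// requires: using Grpc.Core;  (for Metadata)
// Pass the access token as a Bearer header on the protected ShoppingCart gRPC calls.
var headers = new Metadata
{
    { "Authorization", $"Bearer {token}" }
};

// For example, when getting the shopping cart for the configured user:
var shoppingCartModel = await scClient.GetShoppingCartAsync(
    new GetShoppingCartRequest { Username = _config.GetValue<string>("WorkerService:UserName") },
    headers);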
I have tried to implement the main logic of the big picture of the application, and we have discussed all of the microservices in the project.
You can check the whole source code of these developments at the GitHub link of the project:
aspnetrun/run-aspnet-grpc (github.com)
Step by Step Development w/ Course
I have just published a new course — Using gRPC in Microservices
Communication with .Net 5.
In the course, we are going to build high-performance gRPC inter-service communication between backend microservices with .NET 5 and ASP.NET 5.
References
https://grpc.io/docs/what-is-grpc/introduction/
https://grpc.io/docs/what-is-grpc/core-concepts/
https://grpc.io/docs/languages/csharp/basics/
https://developers.google.com/protocol-buffers/docs/proto3#simple
https://auth0.com/blog/implementing-microservices-grpc-dotnet-core-3/
https://www.jetbrains.com/dotnet/guide/tutorials/dotnet-days-online-2020/build-a-highly-performant-interservice-communication-with-grpc-for-aspnet-core/
https://www.ndcconferences.com/slot/modern-distributed-systems-with-grpc-in-asp-net-core-3
http://www.canertosuner.com/post/grpc-nedir-net-core-grpc-service-olusturma
https://medium.com/@berkemrecabuk/grpc-net-core-ile-client-server-streaming-2824e2082a98
https://devblogs.microsoft.com/aspnet/grpc-performance-improvements-in-net-5/
https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/grpc
https://medium.com/@akshitjain_74512/inter-service-communication-with-grpc-d815a561e3a1
Grpc
Microservices
Dotnet
Aspnet
Csharp
Deploying .Net Microservices to Azure Kubernetes Services (AKS) and Automating with Azure DevOps
Mehmet Ozkaya · Follow
Published in aspnetrun · 6 min read · Jan 13, 2021
In this article, we’re going to learn how to deploy .NET microservices into Kubernetes, move the deployments to the cloud with Azure Kubernetes Service (AKS) using Azure Container Registry (ACR), and, in the last section, learn how to automate deployments with Azure DevOps and GitHub.
In the image above, you can find the steps of our article structure.
We’re going to containerize our microservices in a Docker environment, push these images to Docker Hub and deploy the microservices on Kubernetes. With the same setup, we are going to shift the deployment to the cloud with Azure Kubernetes Service (AKS), pushing the images to Azure Container Registry (ACR).
We will also cover additional topics:
• Docker Compose for microservices
• Kubernetes (K8s) components
• Zero-downtime deployments
• Using Azure resources like ACR and AKS
• Automating the whole deployment process by writing custom pipelines with Azure DevOps, and so on.
Step by Step Development w/ Course
I have just published a new course — Deploying .Net Microservices with
K8s, AKS and Azure DevOps.
In this course, we’re going to learn how to deploy .NET microservices into Kubernetes, move the deployments to the cloud with Azure Kubernetes Service (AKS) using Azure Container Registry (ACR), and, in the last section, automate deployments with the CI/CD pipelines of Azure DevOps and GitHub.
Overall Picture
See the overall picture below. We will have three microservices, which we are going to develop and deploy together.
Shopping MVC Client Application
First of all, we are going to develop the Shopping MVC client application for consuming the API resource, which will be the Shopping.Client ASP.NET MVC web project. We will start by developing this project as a standalone web application that includes its own data. Then we will add container support with a Dockerfile, push the Docker image to Docker Hub, and look at deployment options such as the "Azure Web App for Containers" resource for a single web application.
Shopping API Application
After that, we are going to develop the Shopping.API microservice with MongoDB and compose all the Docker containers.
This API project will hold the product data and perform CRUD operations, exposing API methods to be consumed by the Shopping.Client project.
We will containerize the API application by creating a Dockerfile and push its image to Azure Container Registry.
Mongo Db
Our API project will manage product records stored in a NoSQL MongoDB database, as described in the picture.
We will pull the MongoDB Docker image from Docker Hub and create the connection from our API project.
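As a rough sketch of that connection (the connection string, database name, collection name and Product class below are assumptions for illustration, not taken from the article), the Shopping.API project could reach MongoDB through the official MongoDB.Driver package like this:

// minimal sketch; assumes the MongoDB.Driver NuGet package - names below are placeholders
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Driver;

public class Product
{
    [BsonId]
    [BsonRepresentation(BsonType.ObjectId)]
    public string Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductContext
{
    public IMongoCollection<Product> Products { get; }

    public ProductContext(string connectionString = "mongodb://localhost:27017")
    {
        // the container exposes the default MongoDB port; with docker-compose the host name would be the service name
        var client = new MongoClient(connectionString);
        var database = client.GetDatabase("ShoppingDb");          // database name is an assumption
        Products = database.GetCollection<Product>("Products");   // collection name is an assumption
    }
}

// usage inside an API method, e.g. for a GET endpoint:
// var products = await context.Products.Find(p => true).ToListAsync();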
At the end of the section, we will have three services, which are the Shopping.Client, Shopping.API and MongoDB containers.
As you can see, we will have:
• Created Docker images,
• Composed Docker containers and tested them,
• Deployed these container images on a local Kubernetes cluster,
• Pushed our images to ACR,
• Shifted the deployment to the cloud with Azure Kubernetes Service (AKS),
• Updated the microservices with zero-downtime deployments.
Deploy to Azure Kubernetes Service (AKS) through CI/CD Azure Pipelines
In the last step, we focus on automating deployments by creating CI/CD pipelines in Azure DevOps. We will develop separate deployment pipeline YAML files for each microservice using Azure Pipelines.
When we push code to GitHub, the microservice's pipeline triggers, builds the Docker image, pushes it to ACR, and deploys it to Azure Kubernetes Service with a zero-downtime deployment.
By the end of these articles, you'll know how to deploy your multi-container microservices applications while automating the whole deployment process for each service separately.
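As a rough sketch of what one such per-microservice pipeline YAML could look like (the service connection names, registry, repository and manifest paths below are placeholders, not the actual values used in the course), a build-and-deploy pipeline in Azure DevOps might be shaped like this:

# minimal sketch of an azure-pipelines.yml for a single microservice; all names are placeholders
trigger:
  branches:
    include:
      - master

pool:
  vmImage: 'ubuntu-latest'

steps:
  # build the Docker image from the microservice's Dockerfile and push it to ACR
  - task: Docker@2
    inputs:
      containerRegistry: 'my-acr-service-connection'   # ACR service connection name (placeholder)
      repository: 'shoppingapi'
      command: 'buildAndPush'
      Dockerfile: 'Shopping/Shopping.API/Dockerfile'
      tags: '$(Build.BuildId)'

  # deploy the new image to AKS using the Kubernetes manifests kept in the repository
  - task: KubernetesManifest@0
    inputs:
      action: 'deploy'
      kubernetesServiceConnection: 'my-aks-service-connection'   # AKS service connection name (placeholder)
      manifests: 'k8s/shoppingapi.yaml'
      containers: 'myregistry.azurecr.io/shoppingapi:$(Build.BuildId)'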
Background
This is the introduction to the series of articles. You can follow the series with the links below.
• 0- Deploying .Net Microservices
• 1- Preparing Multi-Container Microservices Applications for
Deployment
• 2- Deploying Microservices on Kubernetes
• 3- Deploy Microservices into Cloud Azure Kubernetes Service (AKS) with
using Azure Container Registry (ACR)
• 4- Automate Deployments with CI/CD pipelines on Azure Devops
Prerequisites
• Visual Studio 2019 and VS Code
• Docker Desktop and a Docker account for pushing images to Docker Hub
• Git locally and a GitHub account, so our DevOps pipelines can be triggered when we push code
• An Azure free subscription for creating the Azure resources like ACR, Web App for Containers, AKS and so on
• An Azure DevOps account for the CI/CD pipelines
Source Code
Get the source code from the AspnetRun Microservices GitHub repository. Clone or fork the repository, and if you like it, don't forget the star :) If you find a problem or have a question, you can open an issue directly on the repository.
Basics — What are Docker and Containers?
Docker is an open platform for developing, shipping, and running
applications. Docker enables you to separate your applications from your
infrastructure so you can deliver software quickly.
By taking advantage of Docker's methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production. Docker automates the deployment of applications as portable, self-sufficient containers that can run in the cloud or on-premises. Docker containers can run anywhere, from your local computer to the cloud, and container images run natively on both Linux and Windows.
Docker Container
A container is a standard unit of software that packages up code and all its
dependencies so the application runs quickly and reliably from one
computing environment to another. A Docker container image is a
lightweight, standalone, executable package of software that includes
everything needed to run an application.
Docker containers, images, and registries
When using Docker, a developer develops an application and packages it
with its dependencies into a container image. An image is a static
representation of the application with its configuration and dependencies.
In order to run the application, the application’s image is instantiated to
create a container, which will be running on the Docker host.
Containers can be tested on a developer's local machine.
As you can see in the image above, this is how the Docker components relate to each other.
A developer creates a container locally and pushes its image to a Docker registry.
Alternatively, the developer can download an existing image from a registry and create a container from that image in the local environment.
Developers should store images in a registry, which is a library of images and is needed when deploying to production orchestrators. Docker images are stored in a public registry via Docker Hub; other vendors provide registries for different collections of images, such as Azure Container Registry. Alternatively, enterprises can have a private on-premises registry for their own Docker images.
If we look at a more specific example of containerizing an application with Docker (a sketch follows this list):
• First, we write a Dockerfile for our application,
• Then we build the application with this Dockerfile, which creates the Docker image,
• And lastly we run this image on any machine, which creates a running Docker container from the image.
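As a rough sketch of those three steps for an ASP.NET Core service (the project name, image tag and ports below are placeholders, not the actual files from the repository), a typical multi-stage Dockerfile and the matching build and run commands look like this:

# minimal sketch of a Dockerfile for an ASP.NET Core (.NET 5) microservice; paths and names are placeholders
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore "Shopping.API/Shopping.API.csproj"
RUN dotnet publish "Shopping.API/Shopping.API.csproj" -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Shopping.API.dll"]

# step 2 - build the image from the Dockerfile:
#   docker build -t shoppingapi:latest .
# step 3 - create and run a container from that image:
#   docker run -d -p 8000:80 --name shoppingapi shoppingapi:latest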
We will use all of these steps while orchestrating the whole microservices application with Docker and Kubernetes in the next articles ->
• 1- Preparing Multi-Container Microservices Applications for
Deployment
References
https://docs.docker.com/get-started/overview/
https://docs.docker.com/get-started/
https://medium.com/batech/docker-nedir-docker-kavramlar%C4%B1-avantajlar%C4%B1-901b37742ee0
https://www.mediaclick.com.tr/tr/blog/docker-nedir-docker-ne-ise-yarar
https://www.docker.com/resources/what-container
Azure Kubernetes Service
Azure Devops
Azure Pipelines
Microservices
Azure Container Registry
Written by Mehmet Ozkaya