In my previous post, I walked through the benefits of using PostSharp for caching in a .NET Core server application by making it work in a single-node application. In this post, we will see how we can enable Redis as the caching backend through PostSharp's modular nature.
@ 07-03-2019
by Tugberk Ugurlu

In my previous post, I walked through the benefits of using PostSharp for caching in a .NET Core server application. However, the example I showed there only works for a single-node application, and as we know, hardly any application today runs on a single node. Deploying to multiple nodes brings several benefits, such as further fault tolerance and load distribution.

Luckily for us, PostSharp's caching backend is modular, and the default in-memory one I used in my previous post can be swapped out. One of the out-of-the-box implementations is based on Redis, a highly scalable, distributed data structure server. One of the most common use cases for Redis is as an ephemeral key/value store to power the caching needs of applications.

Run Redis Locally

The best way to run Redis locally is through Docker. Let’s run the below command to do this:

docker run --name postsharp-redis -p 6379:6379 -d redis
a30f1c1e991e0159fb5f96dfb053f50c50726101907c7f76d319d5e987a6cf3a

We have just got a Redis instance up and running in our local environment, exposed to the host machine through a TCP port mapping so that it's available on port 6379. The final thing we need to do to get this ready for PostSharp usage is to set up keyspace notifications to include the AKE events. You can see the Redis keyspace notifications documentation for details on this.
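One way to do this, assuming the container name from the docker command above (you could also set this in redis.conf instead), is through redis-cli inside the container:

```shell
# Enable keyspace notifications: A = all event classes, K = keyspace events, E = keyevent events
docker exec postsharp-redis redis-cli CONFIG SET notify-keyspace-events AKE

# Verify the setting took effect
docker exec postsharp-redis redis-cli CONFIG GET notify-keyspace-events
```

Note that CONFIG SET changes are not persisted across container restarts unless you also rewrite or mount the configuration file.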

Configure for Redis Cache

The first thing to do is to install the NuGet package that contains the Redis caching backend implementation for PostSharp.

dotnet add package PostSharp.Patterns.Caching.Redis --version 6.2.8

Then, all we need to do is to change the caching backend, which we configured inside our Program.Main method in the previous post, to the Redis implementation:

string connectionConfiguration = "localhost";
var connection = ConnectionMultiplexer.Connect(connectionConfiguration);
var redisCachingConfiguration = new RedisCachingBackendConfiguration();
CachingServices.DefaultBackend = RedisCachingBackend.Create(connection, redisCachingConfiguration);

Notice the server address we have entered, which points to the Redis instance we got up and running through Docker and exposed to the host through port mapping. As we used the default Redis port, we didn’t need to state it explicitly.

From this point forward, our app is ready to run with Redis caching enabled, without a single line of code change in the app components. The only change we had to make was on the configuration side.

For production, it’s worth getting hold of the Redis server address through a configuration system, such as the one provided with ASP.NET Core, so that you can swap it based on your environment.
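As a sketch of that idea, assuming a connection string stored under a hypothetical "Redis" section in appsettings.json (the key name and shape here are my own, not from the original sample), the wiring could look like this:

```csharp
// Requires: Microsoft.Extensions.Configuration, Microsoft.Extensions.Configuration.Json,
// Microsoft.Extensions.Configuration.EnvironmentVariables, StackExchange.Redis,
// PostSharp.Patterns.Caching.Redis

// appsettings.json (hypothetical):
// { "Redis": { "ConnectionString": "localhost:6379" } }

var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json")
    .AddEnvironmentVariables() // lets you override the value per environment
    .Build();

string connectionConfiguration = configuration["Redis:ConnectionString"];
var connection = ConnectionMultiplexer.Connect(connectionConfiguration);
CachingServices.DefaultBackend = RedisCachingBackend.Create(
    connection, new RedisCachingBackendConfiguration());
```

With this in place, swapping the Redis endpoint between environments becomes a configuration change rather than a code change.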

PostSharp is a .NET library which gives you the ability to program in a declarative style and allows you to address many cross-cutting concerns with a minimal amount of code by abstracting the complexity away from you. In this post, I will be looking into how PostSharp helps us with caching to speed up the performance of our applications drastically.
@ 05-04-2019
by Tugberk Ugurlu

One of the first criteria of effective code is that it does its job with as few lines of code as possible. Effective code does not repeat itself. Less code in our codebases increases our chances of having fewer bugs. So, how do we avoid repeating ourselves? We apply our intelligence and abstraction skills to generalize behaviors into methods and classes, the constructs offered by C# to implement abstraction, which we call encapsulation. However, some features such as logging or caching cannot be properly encapsulated into a class or method. That’s why you end up with code repetition. C# alone is simply not able to properly encapsulate features like logging, caching, security, INotifyPropertyChanged, undo/redo, etc.

I have been meaning to look into Aspect-oriented programming for a while to help make my code less noisy without sacrificing the application's acceptable performance and observability. This would let me cut right to the business logic and care about what's more important. When the topic is Aspect-oriented programming, the first piece of software that comes to mind in the .NET world is obviously PostSharp, and in this post, I will be looking at how PostSharp can help us cut the noise out of our code, showcased with a sample on data caching.

Getting Started with PostSharp

First of all, let's create our project structure and install PostSharp. I have .NET Core SDK 2.2.202 installed and ran the below commands to create the empty project structure.

dotnet new web --no-https
dotnet new sln
dotnet sln 1-sample-web.sln add 1-sample-web.csproj
dotnet new globaljson

In order to give you an idea about the value proposition of PostSharp, I created this little ASP.NET Core sample which exposes HTTP APIs to read, write and modify the cars in our system. Some of the code here is contrived, such as sleeping for half a second, but we will see why this is useful for seeing PostSharp in action.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

namespace _1_sample_web
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            app.UseMvc();
        }
    }

    public class CarsController : Controller
    {
        private static readonly CarsContext _carsCtx = new CarsContext();

        public IEnumerable<Car> Get()
        {
            return _carsCtx.GetAll();
        }

        public IActionResult GetCar(int id)
        {
            var carTuple = _carsCtx.GetSingle(id);
            if (!carTuple.Item1)
            {
                return NotFound();
            }

            return Ok(carTuple.Item2);
        }

        public IActionResult PostCar(Car car)
        {
            var createdCar = _carsCtx.Add(car);
            return CreatedAtAction(nameof(GetCar),
                new { id = createdCar.Id },
                createdCar);
        }

        public IActionResult PutCar(int id, Car car)
        {
            car.Id = id;
            if (!_carsCtx.TryUpdate(car))
            {
                return NotFound();
            }

            return Ok(car);
        }

        public IActionResult DeleteCar(int id)
        {
            if (!_carsCtx.TryRemove(id))
            {
                return NotFound();
            }

            return NoContent();
        }
    }

    public class Car
    {
        public int Id { get; set; }

        public string Make { get; set; }

        public string Model { get; set; }

        public int Year { get; set; }

        [Range(0, 500000)]
        public float Price { get; set; }
    }

    public class CarsContext
    {
        private int _nextId = 9;
        private object _idLock = new object();

        private readonly ConcurrentDictionary<int, Car> _database = new ConcurrentDictionary<int, Car>(new HashSet<KeyValuePair<int, Car>>
        {
            new KeyValuePair<int, Car>(1, new Car { Id = 1, Make = "Make1", Model = "Model1", Year = 2010, Price = 10732.2F }),
            new KeyValuePair<int, Car>(2, new Car { Id = 2, Make = "Make2", Model = "Model2", Year = 2008, Price = 27233.1F }),
            new KeyValuePair<int, Car>(3, new Car { Id = 3, Make = "Make3", Model = "Model1", Year = 2009, Price = 67437.0F }),
            new KeyValuePair<int, Car>(4, new Car { Id = 4, Make = "Make4", Model = "Model3", Year = 2007, Price = 78984.2F }),
            new KeyValuePair<int, Car>(5, new Car { Id = 5, Make = "Make5", Model = "Model1", Year = 1987, Price = 56200.89F }),
            new KeyValuePair<int, Car>(6, new Car { Id = 6, Make = "Make6", Model = "Model4", Year = 1997, Price = 46003.2F }),
            new KeyValuePair<int, Car>(7, new Car { Id = 7, Make = "Make7", Model = "Model5", Year = 2001, Price = 78355.92F }),
            new KeyValuePair<int, Car>(8, new Car { Id = 8, Make = "Make8", Model = "Model1", Year = 2011, Price = 1823223.23F })
        });

        public IEnumerable<Car> GetAll()
        {
            Thread.Sleep(500); // contrived delay to simulate a slow data store

            return _database.Values;
        }

        public IEnumerable<Car> Get(Func<Car, bool> predicate)
        {
            return _database.Values.Where(predicate);
        }

        public Tuple<bool, Car> GetSingle(int id)
        {
            Thread.Sleep(500); // contrived delay to simulate a slow data store

            Car car;
            var doesExist = _database.TryGetValue(id, out car);
            return new Tuple<bool, Car>(doesExist, car);
        }

        public Car GetSingle(Func<Car, bool> predicate)
        {
            return _database.Values.FirstOrDefault(predicate);
        }

        public Car Add(Car car)
        {
            lock (_idLock)
            {
                car.Id = _nextId;
                _nextId++;
            }

            _database.TryAdd(car.Id, car);
            return car;
        }

        public bool TryRemove(int id)
        {
            Car removedCar;
            return _database.TryRemove(id, out removedCar);
        }

        public bool TryUpdate(Car car)
        {
            Car oldCar;
            if (_database.TryGetValue(car.Id, out oldCar))
            {
                return _database.TryUpdate(car.Id, car, oldCar);
            }

            return false;
        }
    }
}

Before going further, let's install PostSharp through NuGet. The first thing you want to install is the PostSharp NuGet package, which magically hooks into the compilation step thanks to its custom MSBuild scripts. The other package is PostSharp.Patterns.Diagnostics, as I want to show you a logging example first.

dotnet add package PostSharp
dotnet add package PostSharp.Patterns.Diagnostics

Let's get the sample code from the logging documentation.

using PostSharp.Patterns.Diagnostics;
using PostSharp.Extensibility;

[assembly: Log(AttributePriority = 1, AttributeTargetMemberAttributes = MulticastAttributes.Protected | MulticastAttributes.Internal | MulticastAttributes.Public)]
[assembly: Log(AttributePriority = 2, AttributeExclude = true, AttributeTargetMembers = "get_*")]
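One more piece is needed for the log records to be written anywhere: a logging backend must be selected at application startup. The console backend is what this sample uses (the caching patch later in this post shows the same line in Program.Main):

```csharp
using PostSharp.Patterns.Diagnostics;
using PostSharp.Patterns.Diagnostics.Backends.Console;

public class Program
{
    public static void Main(string[] args)
    {
        // Route all [Log]-generated records to the console
        LoggingServices.DefaultBackend = new ConsoleLoggingBackend();

        // ... build and run the web host as usual
    }
}
```

The two assembly-level attributes above then decide which methods get instrumented, without touching any method body.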

When you run the application now, you will be impressed, and probably blown away, by how much value and observability you get with so little work!

PostSharp Caching Example

The main reason for me to explore PostSharp is caching, and this is where PostSharp Caching really shines. Let's run our sample application again and perform a mini load test on it.

1..10 | foreach {write-host "$([Math]::Round((Measure-Command -Expression { Invoke-WebRequest -Uri http://localhost:5000/cars }).TotalMilliseconds, 1))"}

You will notice that each call to the "/cars" endpoint takes more than 500ms, which is fair, since we sleep for that amount of time on purpose. However, this could well be the case when you connect to a data store in a real-world application. Even if your data store is performant and returns the result instantly, we are still wasting resources, because the data hasn't changed and we keep making unnecessary trips to the database for data we have already retrieved.

Caching is the solution to this problem. However, it's not easy to get right on your own in a web application, which is multithreaded in nature. You can use built-in APIs such as the ones that come with ASP.NET Core, but then you need to express your caching requirements in code, in a verbose way that makes it hard to see the business logic behind a cluttered codebase; suddenly, you are struggling to add or modify functionality in an existing piece of software.
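To make that contrast concrete, here is a rough sketch of what hand-rolled caching with ASP.NET Core's IMemoryCache tends to look like for our GetAll scenario (the wrapper class and cache key here are illustrative, not part of the original sample):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Caching.Memory;

public class CachedCarsContext
{
    private readonly CarsContext _inner = new CarsContext();
    private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());

    public IEnumerable<Car> GetAll()
    {
        // Cache key management, the expiration policy and the fallback
        // path all leak into the business-facing code path.
        return _cache.GetOrCreate("cars:all", entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return _inner.GetAll();
        });
    }
}
```

Every cached method needs this kind of ceremony, and invalidation code spreads even further. This is exactly the clutter PostSharp lets us declare away.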

Let's see how PostSharp can help us here. First, we need to add the caching support by installing PostSharp.Patterns.Caching NuGet package.

dotnet add package PostSharp.Patterns.Caching

Then, we need to make some changes to our code to enable caching. Here is the git patch which shows exactly what I have changed:

From a20fc8e95ffd9bf5d424467e0e1283ae5891454a Mon Sep 17 00:00:00 2001
From: Tugberk Ugurlu
Date: Tue, 9 Apr 2019 23:38:32 +0100
Subject: [PATCH] add caching

 postsharp/0-caching/1-sample-web/1-sample-web.csproj | 1 +
 postsharp/0-caching/1-sample-web/Program.cs          | 3 +++
 postsharp/0-caching/1-sample-web/Startup.cs          | 4 +++-
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/postsharp/0-caching/1-sample-web/1-sample-web.csproj b/postsharp/0-caching/1-sample-web/1-sample-web.csproj
index bd55b6c..008c486 100644
--- a/postsharp/0-caching/1-sample-web/1-sample-web.csproj
+++ b/postsharp/0-caching/1-sample-web/1-sample-web.csproj
@@ -10,6 +10,7 @@
diff --git a/postsharp/0-caching/1-sample-web/Program.cs b/postsharp/0-caching/1-sample-web/Program.cs
index 3dcae2c..9d241eb 100644
--- a/postsharp/0-caching/1-sample-web/Program.cs
+++ b/postsharp/0-caching/1-sample-web/Program.cs
@@ -7,6 +7,8 @@ using Microsoft.AspNetCore;
 using Microsoft.AspNetCore.Hosting;
 using Microsoft.Extensions.Configuration;
 using Microsoft.Extensions.Logging;
+using PostSharp.Patterns.Caching;
+using PostSharp.Patterns.Caching.Backends;
 using PostSharp.Patterns.Diagnostics;
 using PostSharp.Patterns.Diagnostics.Backends.Console;
@@ -18,6 +20,7 @@ namespace _1_sample_web
         public static void Main(string[] args)
             LoggingServices.DefaultBackend = new ConsoleLoggingBackend();
+            CachingServices.DefaultBackend = new MemoryCachingBackend();
diff --git a/postsharp/0-caching/1-sample-web/Startup.cs b/postsharp/0-caching/1-sample-web/Startup.cs
index 18b3dbc..bed37ca 100644
--- a/postsharp/0-caching/1-sample-web/Startup.cs
+++ b/postsharp/0-caching/1-sample-web/Startup.cs
@@ -10,6 +10,7 @@ using Microsoft.AspNetCore.Hosting;
 using Microsoft.AspNetCore.Http;
 using Microsoft.AspNetCore.Mvc;
 using Microsoft.Extensions.DependencyInjection;
+using PostSharp.Patterns.Caching;
 namespace _1_sample_web
@@ -115,7 +116,8 @@ namespace _1_sample_web
             new KeyValuePair(7, new Car { Id = 7, Make = "Make7", Model = "Model5", Year = 2001, Price = 78355.92F }),
             new KeyValuePair(8, new Car { Id = 8, Make = "Make8", Model = "Model1", Year = 2011, Price = 1823223.23F })

+        [Cache]
         public IEnumerable GetAll()
2.15.2 (Apple Git-101.1)

Couple of things we have done here:

  • In our entry point, we configured the cache backend we wanted to use, which in our case is the in-memory MemoryCachingBackend.
  • We marked the CarsContext.GetAll method with the CacheAttribute.
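Consolidated from the patch, the two changes boil down to this:

```csharp
// In Program.Main: pick the in-memory caching backend
CachingServices.DefaultBackend = new MemoryCachingBackend();

// In Startup.cs: mark the expensive read path as cacheable
[Cache]
public IEnumerable<Car> GetAll()
{
    return _database.Values;
}
```

PostSharp's build step rewrites the method so that its result is looked up in, and stored to, the configured backend, keyed on the method and its arguments.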

Believe it or not, this is pretty much it! When we run the sample mini load test again, you will see a dramatic difference, even though the first request still pays the full cost.

Again, very little work but tremendous gain in terms of value!

We have improved our performance drastically but have now introduced a very nasty problem: serving stale data. Thankfully, PostSharp offers a solution to cache invalidation out of the box without losing our declarative style for simple cases. For this, we need the InvalidateCacheAttribute aspect. When this attribute is applied to a method, any call to that method removes the cached values of one or more other methods from the cache. It’s worth noting that the cached methods are matched against the invalidating method by name and parameter types. PostSharp takes care of the rest during the build step to set up all the invalidation logic.

For example, the below changes make it possible to invalidate the cache of a single car entity when it’s updated.

From f0889e68e55298e43360e01dd3b0e8b1cf6468e3 Mon Sep 17 00:00:00 2001
From: Tugberk Ugurlu
Date: Tue, 30 Apr 2019 09:40:21 +0100
Subject: [PATCH] cache invalidation, declarative

 postsharp/0-caching/1-sample-web/Startup.cs | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/postsharp/0-caching/1-sample-web/Startup.cs b/postsharp/0-caching/1-sample-web/Startup.cs
index bed37ca..ec95d1e 100644
--- a/postsharp/0-caching/1-sample-web/Startup.cs
+++ b/postsharp/0-caching/1-sample-web/Startup.cs
@@ -62,7 +62,7 @@ namespace _1_sample_web
         public IActionResult PutCar(int id, Car car) 
             car.Id = id;
-            if (!_carsCtx.TryUpdate(car)) 
+            if (!_carsCtx.TryUpdate(id, car)) 
                 return NotFound();
@@ -130,6 +130,7 @@ namespace _1_sample_web
             return _database.Values.Where(predicate);
+        [Cache]
         public Tuple GetSingle(int id) 
@@ -166,7 +167,8 @@ namespace _1_sample_web
             return _database.TryRemove(id, out removedCar);
-        public bool TryUpdate(Car car) 
+        [InvalidateCache(nameof(GetSingle))]
+        public bool TryUpdate(int id, Car car) 
2.20.1 (Apple Git-117)

However, this only invalidates the GetSingle cache, and we still have the problem of serving stale data from the GetAll method. There is also out-of-the-box support for imperatively invalidating an item from the cache, which is very handy for cases where we cannot invalidate purely based on a method signature. You can see below an example of what this looks like.

From f629b295fc8f9bbd44904284cb0ec832d51185be Mon Sep 17 00:00:00 2001
From: Tugberk Ugurlu
Date: Tue, 30 Apr 2019 09:55:44 +0100
Subject: [PATCH] cache invalidation, imperatively

 postsharp/0-caching/1-sample-web/Startup.cs | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/postsharp/0-caching/1-sample-web/Startup.cs b/postsharp/0-caching/1-sample-web/Startup.cs
index ec95d1e..8ee6652 100644
--- a/postsharp/0-caching/1-sample-web/Startup.cs
+++ b/postsharp/0-caching/1-sample-web/Startup.cs
@@ -67,6 +67,10 @@ namespace _1_sample_web
                 return NotFound();
+            CachingServices.Invalidation.Invalidate(
+                typeof(CarsContext).GetMethod(nameof(CarsContext.GetAll)), 
+                _carsCtx);
             return Ok(car);
2.20.1 (Apple Git-117)

We invalidate the GetAll method cache on the given CarsContext instance whenever any of the items is updated.
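Put together from the two patches, the relevant part of the PutCar action ends up looking like this:

```csharp
public IActionResult PutCar(int id, Car car)
{
    car.Id = id;
    if (!_carsCtx.TryUpdate(id, car))
    {
        return NotFound();
    }

    // Imperatively evict the cached GetAll result on this CarsContext instance.
    CachingServices.Invalidation.Invalidate(
        typeof(CarsContext).GetMethod(nameof(CarsContext.GetAll)),
        _carsCtx);

    return Ok(car);
}
```

The declarative [InvalidateCache] on TryUpdate handles the single-entity cache, while this imperative call handles the collection cache that can't be matched by signature alone.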

This is all I want to cover in this post in terms of the API surface area of PostSharp, and I hope it gives you a taste of how simple it is to get going with PostSharp. The PostSharp Caching documentation is also very comprehensive, and I recommend you check it out for further details.


The biggest limitation I have seen with PostSharp is its lack of .NET Core compilation support outside of Windows at the time of writing (you may check the current status here). You can run PostSharp on .NET Core, even outside of Windows. However, you first need a Windows machine to be able to compile your code.

Apart from this, there is also a trade-off to make with PostSharp, which is the increased build time. With incremental builds, however, this increase is barely noticeable. Besides, compared to the value you get from the tool, I think this is a trade-off well worth making.


This post just touches the surface of what you can achieve with PostSharp. In terms of caching, for example, there is even support for Redis, which is very suitable for horizontally scaled web applications where multiple nodes serve HTTP requests.

PostSharp provides help with many other patterns such as multithreading. You can get started with PostSharp Essentials, the free but project-size-limited edition.

If you have never been exposed to software system design challenges, you might be totally lost on even where to begin. Dive into this post to find out about what matters when it comes to software architecture and system design and how you can get your grip in this wide area of software engineering.
@ 02-23-2019
by Tugberk Ugurlu

If you have never been exposed to software system design challenges, you might be totally lost on even where to begin. I believe in finding the limits to a certain extent first, and then getting your hands dirty. The way you can start this is by finding some interesting products or services (ideally ones you are a fan of) and learning about their implementations. You will be surprised that, however simple they may look, they most probably involve a great deal of complexity. Don’t forget: simple is usually complex and that’s OK™.

Photo by Isaac Smith on Unsplash

I believe the biggest suggestion I can give you while approaching system design challenges is this: do not assume anything! You should pin down the facts and expectations about the system first. Here are some good questions to ask that will help you start this process:

  • What is the problem you are trying to solve? 
  • What is the peak volume of users that will interact with your system?
  • What are the data write and read patterns going to be?
  • What are the expected failure cases, how do you plan to mitigate them?
  • What are the availability and consistency expectations?
  • Do you need to worry about any auditing, regulation aspects?
  • What type of sensitive data are you going to be storing?
These are just a few questions that have worked for me and the teams I worked with over the years. Once you have answers to these questions (or any others relevant to the context you are in), you can start diving into the technical side of the problem.

Setting Your Baseline

What do I mean by the baseline here? Well, in this era of software development, most problems "can" be solved by already existing techniques and technologies. Knowing these to a certain extent will give you a head start when you are faced with similar problems. Remember, we write software to solve business' and users' problems, and the desire is to do that in the most straightforward, simple way from a user experience point of view. Why do you need to remember this? You might be thinking, "what's the point of me writing software if I am just here to follow a pattern?", and it could well be your reality that some problems must be solved in unique ways. The craft here is in the decision-making process: deciding where to do what. Surely, we will face challenging, unique problems at certain times. However, if our baseline is solid, we will know whether we should direct our efforts into finding ways to solve the problem or into further understanding its depth.

I hope I have convinced you by now that solid knowledge of how some of the exciting systems out there are architecturally shaped is quite critical for building an appreciation of the craft and a solid baseline.

OK, but where to start? Donne Martin has a GitHub repo called system-design-primer which helps you learn how to design large-scale systems and also prep for the system design interviews. Inside this, there is a section dedicated to real world architectures which also involves some system designs of well-known companies such as Twitter, Uber, etc.

However, before jumping into this, you might want to have some insights on what matters the most in the architectural challenges. This is important because there are A LOT of aspects involved in disambiguating a gnarly, ambiguous problem and solving it within the guidelines of a defined system. Jackson Gabbard, an ex-Facebook employee, has a 50 mins video on system design interviews based on his experience on interviewing hundreds of candidates at Facebook. Even if this is focused on the system design interview objective and what success looks like for that, it's still a very comprehensive resource on what matters the most when it comes to system design. There is also a write-up of this video.

Start Building up Your Data Storage and Retrieval Knowledge

Most of the time, the choice of how you decide to persist and serve data will play a crucial role on the performance of your system. Therefore, you should be able to understand the expectations around data writes and reads about your system first. Then, you should be able to assess these and convert that assessment into a choice. However, you can only do this effectively if you know the existing storage patterns. This essentially means having a good knowledge around database choices. 

Databases are really scalable and durable data structures, so all your knowledge of data structures will be beneficial for understanding the various database choices. For example, Redis is a data structures server, supporting different kinds of values. It allows you to work with data structures such as sets and lists, and offers commonly-known algorithms such as LRU eviction, in a durable and highly available fashion.

Photo by Samuel Zeller on Unsplash

Once you get enough grip on the various data storage patterns, it's time to get into data consistency and availability land. The CAP theorem is the first thing you should try to get a good grip on, and you can polish that off by looking deeper into established consistency and availability patterns. These will give you a wide spectrum for understanding that data writes and reads are really separate concerns, each with its own challenges. By embracing several consistency and availability patterns, you can gain a lot of performance while serving data to your applications.

Finally, around data storage needs, you should also be aware of caching. Should it be on both the client and the server? What data will you cache, and why? How will you invalidate the cache (will it be based on time? If so, for how long?)? This section of system-design-primer should be a good starting point on this topic.

Communication Patterns

Systems are composed of various components, which can be different processes living on the same physical node or different machines sitting in separate parts of your network. Some of these resources might be private within your network, but some need to be accessed publicly by your consumers.

These resources need to be able to communicate with each other and with the outside world. In the context of system design, this introduces another set of unique challenges. Understanding how asynchronous workflows can help you, and what communication protocols are available, such as TCP, UDP and HTTP (which sits on top of TCP), will help you understand the breadth of the problem space and the solutions currently available.

Photo by Tony Stoddard on Unsplash

When dealing with communication to the outside world, security is always another concern that you need to be aware of and actively deal with.

Connection Distribution

I am not sure if this logical grouping makes sense here. I will go with it anyway since it’s the closest term that reflects what I want to cover.

Systems are formed by gluing multiple components together, and the way they communicate with each other is often designed through well-established protocols such as TCP and UDP. However, these protocols are often not enough on their own to cover the needs of today’s systems, which face high load and demands from our consumers. We often need ways to distribute connections in order to handle that load.

Domain Name System (DNS) sits at the core of this distribution. A DNS translates a domain name such as www.example.com to an IP address. Besides this, some DNS services can route traffic through various methods such as weighted round robin and latency-based to help distribute the load.

Load balancing is vital, and nearly every major system on the Web we interact with today sits behind one or more load balancers. Load balancers help us distribute incoming client requests across multiple instances of a resource. There are both hardware and software forms of load balancers, but it’s the software-based ones, such as HAProxy and ELB, that you see used most often. Reverse proxies are very similar to load balancers in concept, with some distinctive differences though, and those differences will drive your choice based on your needs.

Content Delivery Networks (CDNs) are also something you should be aware of. A CDN is a globally distributed network of proxy servers, serving content from locations closer to the user. CDNs are usually preferred for serving static files such as JavaScript, CSS and HTML. It’s also common to see cloud services offer traffic managers (such as Azure Traffic Manager), which give you global distribution and reduced-latency benefits for your dynamic content. However, these services are mostly beneficial if you have stateless web services.

What About My Business Logic? Structuring Business Logic, Workflows and Components

Thus far, we have talked about all the infrastructure-related aspects of a system. These are the parts of your system which your users probably have no idea about and, to be frank, don't give a damn about. What they care about is how they interact with your system, what they can achieve by doing so, and how the system acts on their behalf to make certain decisions and process their data.

As you might guess from this post’s title, I intended this blog post to be about software architecture and system design. Therefore, I wasn’t going to cover the software design patterns concerned with how components are built. However, thinking about this more and more, it’s clear to me that the line between them is very blurred and the two sides are usually interconnected. Take Event Sourcing, for example. Once you adopt this architectural pattern, it affects most parts of your system: how you persist data, what level of consistency you choose for your system’s clients to deal with, how you shape the components within your system, and so on. Therefore, I decided to touch on some of the design and architectural patterns that directly concern your business logic. Even if it’s just touching the surface, it should give you some ideas. Here are a few of them:

Collaboration Approaches

It's highly unlikely that you will be the only one involved in a project where you need to be part of a system design process. Therefore, you need to be able to collaborate with other folks in your team, both inside and outside of your job function. This surface area has both breadth and depth, and as the technical leader, you should be able to address the concerns at each level, going into the required depth. The activities here may involve evaluating technology choices together, or pinning down the business needs and understanding how the work needs to be parallelised.

Photo by Kaleidico on Unsplash

First and foremost, you need to have an accurate and shared understanding of what you are trying to achieve as a business goal and what moving parts are involved. Group modeling techniques such as event storming are powerful methods to accelerate this process and increase your chances of success. You may get into this process before or after you define your service boundaries, depending on your product/service maturity stage. Based on the level of alignment you see here, you may want to facilitate a separate activity to define the Ubiquitous Language for the bounded context you are operating in. When it comes to communicating the architecture of your system, you may find the C4 model for software architecture from Simon Brown useful, especially for understanding what level of depth you should go into while visualising what you are trying to convey.

There are most probably other mature techniques available in this space. However, they will all tie back to your domain understanding, and your experience and knowledge of Domain-driven Design will prove handy.

Some Other Resources

Here are some resources which may help you. These are not in any particular order.

A long time ago (about 5 years, at least), I contributed an article to the SignalR wiki about scaling SignalR with Redis. You can still find the article here. I also blogged about it here. However, over time, the pictures there got lost. I got a few requests from my readers to refresh those images, and I was luckily able to find them :) I decided to publish that article here so that I would have much better control over the content.
@ 08-08-2018
by Tugberk Ugurlu

A long time ago (about 5 years, at least), I contributed an article to the SignalR wiki about scaling a SignalR application with Redis. You can still find the article here. I also blogged about it here. However, over time, the pictures got lost there. I got a few requests from my readers to refresh those images and I was lucky enough to be able to find them :) I decided to publish that article here so that I would have much better control over the content. So, here is the post :)

Please keep in mind that this is a really old post and lots of things have evolved since then. However, I do believe the concepts still resonate and it’s valuable to show the ways of how to achieve this within a cloud provider’s context.

SignalR with Redis Running on a Windows Azure Virtual Machine

This wiki article will walk you through how you can run your SignalR application on multiple machines with Redis as your backplane, using Windows Azure Virtual Machines for scale-out scenarios.

Creating the Windows Azure Virtual Machines

First of all, we will spin up our virtual machines. What we want here is to have two Windows Server 2008 R2 virtual machines for our SignalR application, which we will name Web1-08R2 and Web2-08R2. We will have IIS installed on both of these servers and, at the end, we will load balance the requests on port 80.

Our third virtual machine will be another Windows Server 2008 R2 only for our Redis server. We will call this server Redis-08R2.

To spin up the VMs, go to the new Windows Azure Management Portal and hit the New icon at the bottom-right corner.


Creating a virtual machine running Windows Server 2008 R2 is explained here in detail. We followed the same steps to create our first VM, named Web1-08R2.

The second VM we will be creating takes a slightly different approach from the first one. Under the hood, every virtual machine is a cloud service instance and we want to put our second VM (Web2-08R2) under the same cloud service that our first web VM is running under. To do that, we need to follow the same steps as explained in the previously mentioned article, but when we come to the 3rd step in the creation wizard, we should choose the Connect to existing Virtual Machine option this time and select the first VM we have just created.


As the last step, we now need to create our Redis VM, which will be named Redis-08R2. We will follow the same steps as we did when we were creating our second web VM (Web2-08R2).

Setting Up Redis as a Windows Service

To use Redis on a Windows machine, we went to Redis on Windows prototype GitHub page and cloned the repository and followed the steps explained under How to build Redis using Visual Studio section.

After you build the project, you will have all the files you need under the msvs\bin\release path as zip files. The redisbin.zip file will contain the Redis server, the Redis command line interface and some other tools. The rediswatcherbin.zip file will contain the MSI file to install Redis as a Windows service. You can just copy those zip files to your Redis VM and extract redisbin.zip under c:\redis\bin. Then follow these steps:

  • Currently, there is a bug in the RedisWatcher installer and if you don't have Microsoft Visual C++ 2010 Redistributable Package installed on your machine, the service won't start. So, I installed it first.

  • Copy this redis.conf file and put it under c:\redis\bin directory. Open it up and add a password by adding the following line of code:

    requirepass 1234567

    Take this note into consideration when you are setting up your Redis password:

    Warning: since Redis is pretty fast an outside user can try up to 150k passwords per second against a good box. This means that you should use a very strong password otherwise it will be very easy to break.

  • Then, extract the rediswatcherbin.zip somewhere and run the InstallWatcher.msi to install the service.

  • Navigate to the C:\Program Files (x86)\RedisWatcher directory. You will see a file named watcher.conf inside this directory. Open this file up and replace the entire file with the following text. The only difference here is that we are supplying the redis.conf file path for the server to use:

    exepath c:\redis\bin
    exename redis-server.exe
    workingdir c:\redis\inst1
    runmode hidden
    saveout 1
    cmdparms c:\redis\bin\redis.conf
  • Create a folder named inst1 under c:\redis because we have specified this folder as the working directory for our Redis instance.

  • When you search the Windows services in PowerShell, you will see that the RedisWatcherSvc service is installed.


  • Run the following PowerShell command to start the service for the first time.

    (Get-Service -Name RedisWatcherSvc).Start()

Now we have a Redis server running on our VM. To test if it is actually running, open up a Windows command window under c:\redis\bin and run the following command (assuming you set your password to 1234567):

redis-cli -h localhost -p 6379 -a 1234567

Now, you have a Redis client running.


Ping the Redis server to see if you are really authenticated:


Now, we are nearly set. As a last step on our Redis server, we need to open up TCP port 6379 for external communication. You can do this under the Windows Firewall with Advanced Security window as explained here.
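If you prefer the command line over the Firewall UI, a rule along these lines should achieve the same thing (the rule name here is arbitrary; pick whatever makes it easy to find later):

```shell
netsh advfirewall firewall add rule name="Redis TCP 6379" dir=in action=allow protocol=TCP localport=6379
```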


Communicating Through Internal Endpoints Between Windows Azure Virtual Machines Under Same Cloud Service

When you are inside one of your web VMs, you can simply look up the Redis VM by hostname.


The hostname will resolve to the DIP (Dynamic IP Address) which Windows Azure will use internally. We could configure public endpoints through the Windows Azure Management Portal easily, but in that case, we would be opening Redis up to the whole world. Also, if we communicated with our Redis server through the VIP (Virtual IP Address), we would always go through the load balancer, which has its own additional cost.

So, we can easily connect to our Redis server from any other connected VM by hostname.

The SignalR Application with Redis

Our SignalR application will not be that much different from a normal SignalR application thanks to the SignalR.Redis project. All you need to do is add the SignalR.Redis NuGet package to your application and configure SignalR to use Redis as the message bus inside the Application_Start method in the Global.asax.cs file:

protected void Application_Start(object sender, EventArgs e)
{
    // Hook up Redis
    string server = ConfigurationManager.AppSettings["redis.server"];
    string port = ConfigurationManager.AppSettings["redis.port"];
    string password = ConfigurationManager.AppSettings["redis.password"];

    GlobalHost.DependencyResolver.UseRedis(server, Int32.Parse(port), password, "SignalR.Redis.Sample");
}

For our demo, the AppSettings should look like below:

    <add key="redis.server" value="Redis-08R2" />
    <add key="redis.port" value="6379" />
    <add key="redis.password" value="1234567" />

I put the application under IIS on both of our web servers (Web1-08R2 and Web2-08R2) and configured them to run under the .NET Framework 4.0 integrated application pool.

For this demo, I am using the Redis.Sample chat application included inside the SignalR.Redis project.

Let's test them quickly before going public. I fired up both web applications on the servers and here is the result:


Perfectly running! Let's open them up to the world.

Opening up Port 80 and Load Balancing the Requests

Our requirement here is to make our application reachable over HTTP and, at the same time, load balance the requests between our two web servers.

To do that, we need to go to the Windows Azure Management Portal and set up the TCP endpoints for port 80.

First, we navigate to the dashboard of our Web1-08R2 VM and hit Endpoints from the dashboard menu:


From there, hit the Add Endpoint icon at the bottom of the page:


A wizard is going to appear on the screen:


Click the right-arrow icon to go to the next step, which is the last one, and enter the port details there:


After that, our endpoint will be created:


Follow the same steps for the Web2-08R2 VM as well and open the Add Endpoint wizard. This time, we will be able to select Load-balance traffic on an existing port. Choose the previously created port and continue:


At the last step, enter the proper details and hit save:


We will see our new endpoint being created, but this time the Load Balanced column indicates Yes.


As we configured our web applications without a host name and they are exposed through port 80, we can directly reach our application through the URL or the Public Virtual IP Address (VIP) which is provided to us. When we run our application, we should see it running as below:


No matter which server the request goes to, the message will be broadcast to every client because we are using Redis as a message bus.


A while ago, I wrote about Graphs and gave a few examples of their application to real world problems. In this post, I want to talk about one of the most common graph algorithms, Depth-first search (DFS).
@ 07-28-2018
by Tugberk Ugurlu

A while ago, I wrote about Graphs and gave a few examples of their application to real world problems. I absolutely love graphs as they are so powerful for modelling the data behind several key computer science problems. In this post, I want to talk about one of the most common graph algorithms, Depth-first search (DFS), and how and where it could be useful.

What is Depth-First Search (DFS)?

DFS is a specific algorithm for traversing and searching a graph data structure. Depending on the type of graph, the algorithm might differ. However, the idea is actually quite simple for a Directed Acyclic Graph (DAG):

  1. You start with a source vertex (let's call it "S")
  2. You visit the first neighbour vertex of that node (let's call this "N")
  3. You do the same for "N" and you keep going till you end up at a leaf vertex (L) (which is a vertex that has no edges to another vertex)
  4. Then you visit the second neighbour of L's parent vertex.
  5. You are done once you exhaust all the vertices.

I must admit that this is a slightly simplified version of the algorithm, even for a DAG. For instance, we didn't touch on the fact that we might end up visiting the same vertex multiple times if we don't account for this in our algorithm. There is a really good visualization of this algorithm here, where you can observe how the algorithm works in a visual way through a logical graph representation.
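To make the steps above concrete, here is a minimal sketch of an iterative DFS in C# over an adjacency-list graph. The graph shape and vertex names are made up for illustration; note the visited set guarding against the repeated-visit issue mentioned above:

```csharp
using System;
using System.Collections.Generic;

// An iterative DFS over an adjacency-list graph. The visited set guards
// against processing the same vertex twice when several edges lead to it.
public static class DepthFirstSearch
{
    public static IList<string> Traverse(
        IDictionary<string, string[]> graph, string source)
    {
        var visited = new HashSet<string>();
        var order = new List<string>();
        var stack = new Stack<string>();
        stack.Push(source);

        while (stack.Count > 0)
        {
            var vertex = stack.Pop();
            if (!visited.Add(vertex)) continue; // already seen, skip

            order.Add(vertex);
            var neighbours = graph.TryGetValue(vertex, out var n)
                ? n
                : Array.Empty<string>();

            // Push in reverse so the first neighbour is explored first.
            for (var i = neighbours.Length - 1; i >= 0; i--)
                stack.Push(neighbours[i]);
        }

        return order;
    }
}
```

Running Traverse against the small DAG from the steps above (S pointing at N and one more neighbour, both of which point at a leaf L) visits S, goes all the way down through N to L, and only then backtracks to the remaining neighbour.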


Applications of Depth-First Search

There are various applications of DFS which are used to solve particular problems, such as Topological Sorting and detecting cycles in a graph. There are also occasions where DFS is used as part of another known algorithm to solve a real world problem. One example of that is Tarjan's Algorithm for finding Strongly Connected Components.

This is also a good resource which lists out different real world applications of DFS.
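As a sketch of one of those applications, a topological order of a DAG falls out of DFS almost for free: emit each vertex only after all of its descendants are finished, then reverse the finish order. The vertex names below are illustrative, and the code assumes the input really is acyclic:

```csharp
using System;
using System.Collections.Generic;

// Topological sort of a DAG via DFS post-order: a vertex is appended only
// after every vertex reachable from it, so reversing the finish order
// yields a valid topological ordering.
public static class TopologicalSort
{
    public static IList<string> Sort(IDictionary<string, string[]> dag)
    {
        var visited = new HashSet<string>();
        var finishOrder = new List<string>();

        void Visit(string vertex)
        {
            if (!visited.Add(vertex)) return; // already processed
            foreach (var next in dag.TryGetValue(vertex, out var edges) ? edges : Array.Empty<string>())
                Visit(next);
            finishOrder.Add(vertex); // post-order: all descendants are done
        }

        foreach (var vertex in dag.Keys)
            Visit(vertex);

        finishOrder.Reverse();
        return finishOrder;
    }
}
```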

Other Graph Traversal Algorithms

As you might guess, DFS is not the only known algorithm for traversing a graph data structure. Breadth-First Search (BFS) is another well-known graph traversal algorithm which has similar semantics to DFS, but instead of going deep from a vertex, it prefers to visit all the neighbours of the current vertex first. Bidirectional search is another one of the traversal algorithms, mainly used to find the shortest path from an initial vertex to a goal vertex in a directed graph.
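For contrast, here is a minimal BFS sketch in C# (again with a made-up adjacency-list shape): the defining choice is a queue instead of a stack, which is what makes the traversal visit all neighbours of a vertex before going deeper.

```csharp
using System;
using System.Collections.Generic;

// A minimal BFS: structurally almost identical to an iterative DFS,
// but the queue makes it explore the graph level by level.
public static class BreadthFirstSearch
{
    public static IList<string> Traverse(
        IDictionary<string, string[]> graph, string source)
    {
        var visited = new HashSet<string> { source };
        var order = new List<string>();
        var queue = new Queue<string>();
        queue.Enqueue(source);

        while (queue.Count > 0)
        {
            var vertex = queue.Dequeue();
            order.Add(vertex);
            foreach (var next in graph.TryGetValue(vertex, out var n) ? n : Array.Empty<string>())
                if (visited.Add(next)) // enqueue each vertex only once
                    queue.Enqueue(next);
        }

        return order;
    }
}
```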

Hi 🙋🏻‍♂️ I'm Tugberk Ugurlu.
Coder 👨🏻‍💻, Speaker 🗣, Author 📚, Microsoft MVP 🕸, Blogger 💻, Software Engineering at Deliveroo 🍕🍜🌯, F1 fan 🏎🚀, Loves travelling 🛫🛬
Lives in Cambridge, UK 🏡