
ASP.NET Core Authentication in a Load Balanced Environment with HAProxy and Redis

Token based authentication is a fairly common way of authenticating a user in an HTTP application. However, handling this in a load balanced environment has always required extra care. In this post, I will show you how it's handled in ASP.NET Core by demonstrating it with HAProxy and Redis, with the help of Docker.
2016-11-28 23:31
Tugberk Ugurlu


Token based authentication is a fairly common way of authenticating a user in an HTTP application. ASP.NET and its frameworks have supported this out of the box, without much effort, through different types of authentication approaches such as cookie based authentication, bearer token authentication, etc. ASP.NET Core is no exception, and it has got even better in this area (which we will see in a while).

However, handling this in a load balanced environment has always required extra care, as all of the nodes should be able to read a valid authentication token even if that token was written by another node. The old-school ASP.NET solution to this is to keep the machine key in sync across all the nodes. The machine key, for those who are not familiar with it, is used to encrypt and decrypt the authentication tokens under ASP.NET, and each machine has its own unique key by default. However, you can override this and put your own key in place per application through a setting inside the Web.config file. This approach had its own problems, and with ASP.NET Core the data protection APIs have been revamped, which cleared the way for big improvements in this area such as key expiration and rolling, key encryption at rest, etc. One of those improvements is the ability to store keys in different storage systems, which is what I am going to touch on in this post.
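
As a refresher, that Web.config override looked roughly like this (the key values here are placeholders, not real keys):

<system.web>
  <machineKey validationKey="PLACEHOLDER-VALIDATION-KEY"
              decryptionKey="PLACEHOLDER-DECRYPTION-KEY"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>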

The Problem

Imagine a case where we have an ASP.NET Core application which uses cookie based authentication and stores its user data in MongoDB, implemented using ASP.NET Core Identity and its MongoDB provider.

[Diagram: the application running as a single node, talking directly to MongoDB]

This setup is all fine, and our application should function perfectly. However, if we put this application behind HAProxy and scale it out to two nodes, we will start seeing errors like the one below:

System.Security.Cryptography.CryptographicException: The key {3470d9c3-e59d-4cd8-8668-56ba709e759d} was not found in the key ring.
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.UnprotectCore(Byte[] protectedData, Boolean allowOperationsOnRevokedKeys, UnprotectStatus& status)
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.DangerousUnprotect(Byte[] protectedData, Boolean ignoreRevocationErrors, Boolean& requiresMigration, Boolean& wasRevoked)
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.Unprotect(Byte[] protectedData)
   at Microsoft.AspNetCore.Antiforgery.Internal.DefaultAntiforgeryTokenSerializer.Deserialize(String serializedToken)

Let’s look at the below diagram to understand why we are having this problem:

[Diagram: two application nodes behind HAProxy, each holding its own data protection keys]

By default, ASP.NET Core Data Protection is wired up to store its keys on the file system. If you have your application running on multiple nodes, as shown in the above diagram, each node will have its own keys to protect and unprotect sensitive information like authentication cookie data. As you can guess, this behaviour is problematic with the above structure, since one node cannot read protected data which another node has written.

The Solution

As I mentioned before, one of the extensibility points of the ASP.NET Core Data Protection stack is the storage of the data protection keys. That storage can be a central place which all the nodes of our web application can reach. Let's look at the below diagram to understand what we mean by this:

[Diagram: two application nodes behind HAProxy, both using a central Redis instance as the data protection key store]

Here, we have Redis as our Data Protection key storage. Redis is a good choice here as it's well suited to key-value storage, and that's exactly what we need. With this setup, it will be possible for both nodes of our application to read protected data regardless of which node wrote it.

Wiring up Redis Data Protection Key Storage

With ASP.NET Core 1.0.0, we had to write the implementation ourselves to make ASP.NET Core store Data Protection keys on Redis, but with the 1.1.0 release the team simultaneously shipped a NuGet package which makes it really easy to wire this up: Microsoft.AspNetCore.DataProtection.Redis. This package allows us to easily swap the data protection storage destination to Redis. We can do this while configuring services as part of ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    // sad but a giant hack :(
    // https://github.com/StackExchange/StackExchange.Redis/issues/410#issuecomment-220829614
    var redisHost = Configuration.GetValue<string>("Redis:Host");
    var redisPort = Configuration.GetValue<int>("Redis:Port");
    var redisIpAddress = Dns.GetHostEntryAsync(redisHost).Result.AddressList.Last();
    var redis = ConnectionMultiplexer.Connect($"{redisIpAddress}:{redisPort}");

    services.AddDataProtection().PersistKeysToRedis(redis, "DataProtection-Keys");
    services.AddOptions();

    // ...
}

I have wired it up exactly like this in my sample application in order to show you a working example. It's an example taken from the ASP.NET Identity repository, slightly changed to make it work with the MongoDB Identity store provider.

Note here that the configuration values above are specific to my implementation and it doesn't have to be that way. See these lines inside my Docker Compose file and these ones inside my Startup class to understand how the values are being passed and hooked up.
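
In short, the Redis:Host and Redis:Port lookups above expect a configuration shape like the following (shown here as JSON for illustration; in my sample the values actually arrive through environment variables such as WebApp_Redis__Host):

{
    "Redis": {
        "Host": "redis",
        "Port": 6379
    }
}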

The sample application can be run on Docker through Docker Compose and it will get a few things up and running:

  • Two nodes of the application
  • A MongoDB instance
  • A Redis instance
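
Bringing all of those up is a single command from the repository root, assuming Docker and Docker Compose are installed:

# builds the images and starts the five containers defined in docker-compose.yml
docker-compose up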


You can see my docker-compose.yml file to understand how I hooked things together:

mongo:
    build: .
    dockerfile: mongo.dockerfile
    container_name: haproxy_redis_auth_mongodb
    ports:
      - "27017:27017"

redis:
    build: .
    dockerfile: redis.dockerfile
    container_name: haproxy_redis_auth_redis
    ports:
      - "6379:6379"

webapp1:
    build: .
    dockerfile: app.dockerfile
    container_name: haproxy_redis_auth_webapp1
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_server.urls=http://0.0.0.0:6000
      - WebApp_MongoDb__ConnectionString=mongodb://mongo:27017
      - WebApp_Redis__Host=redis
      - WebApp_Redis__Port=6379
    links:
      - mongo
      - redis

webapp2:
    build: .
    dockerfile: app.dockerfile
    container_name: haproxy_redis_auth_webapp2
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_server.urls=http://0.0.0.0:6000
      - WebApp_MongoDb__ConnectionString=mongodb://mongo:27017
      - WebApp_Redis__Host=redis
      - WebApp_Redis__Port=6379
    links:
      - mongo
      - redis

app_lb:
    build: .
    dockerfile: haproxy.dockerfile
    container_name: app_lb
    ports:
      - "5000:80"
    links:
      - webapp1
      - webapp2

HAProxy is also configured to balance the load between the two application nodes, as you can see inside the haproxy.cfg file, which we copy to the relevant path inside our Dockerfile:

global
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice

defaults
  log global
  mode http
  option httplog
  option dontlognull
  timeout connect 5000
  timeout client 10000
  timeout server 10000

frontend balancer
  bind 0.0.0.0:80
  mode http
  default_backend app_nodes

backend app_nodes
  mode http
  balance roundrobin
  option forwardfor
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header Connection keep-alive
  http-request add-header X-Forwarded-Proto https if { ssl_fc }
  option httpchk GET / HTTP/1.1\r\nHost:localhost
  server webapp1 webapp1:6000 check
  server webapp2 webapp2:6000 check

Those are the details of how I wired up the sample. If we now look closely at the header of the web page, you should see the server name written inside parentheses. If you refresh enough times, you will see that part alternate between the two server names.


This confirms that our load is balanced between the two application nodes. The rest of the demo is actually very boring: it should just work as you expect it to. Go to the "Register" page and register for an account, log out and log back in. All of those interactions should just work. If we look inside the Redis instance, we should also see that the Data Protection key has been written there:

docker run -it --link haproxy_redis_auth_redis:redis --rm redis redis-cli -h redis -p 6379
LRANGE DataProtection-Keys 0 10


Conclusion and Going Further

I believe I was able to show you what you need to care about in terms of authentication when you scale out your application to multiple nodes. However, do not take my sample as-is and apply it to your production application :) There are a few important things that suck in my sample, like the fact that my application nodes talk to Redis unencrypted. You may want to consider exposing Redis over a proxy which supports encryption.

The other important bit of my implementation is that all of the nodes of my application act as Data Protection key generators. Even though I haven't seen many problems with this in practice so far, you may want to restrict key generation to a single node. You can achieve this by calling DisableAutomaticKeyGeneration, like below, during the configuration stage on your secondary nodes:

public void ConfigureServices(IServiceCollection services)
{
    services.AddDataProtection().DisableAutomaticKeyGeneration();
}

I would suggest determining whether a node is primary through a configuration value, so that you can override it through an environment variable, for example.
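
A minimal sketch of what that could look like is below; the DataProtection:IsPrimaryNode configuration key is a name I made up, while the rest matches the wiring from earlier:

public void ConfigureServices(IServiceCollection services)
{
    // "redis" is the ConnectionMultiplexer instance created as shown earlier
    var dataProtectionBuilder = services.AddDataProtection()
        .PersistKeysToRedis(redis, "DataProtection-Keys");

    // Set the made-up "DataProtection:IsPrimaryNode" value to true (e.g. through
    // an environment variable) on the single node that should generate keys,
    // and leave it false on all the other nodes.
    if (!Configuration.GetValue<bool>("DataProtection:IsPrimaryNode"))
    {
        dataProtectionBuilder.DisableAutomaticKeyGeneration();
    }
}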

Moving to ASP.NET Core RC2: Tooling

The .NET Core Runtime RC2 was released a few days ago, along with .NET Core SDK Preview 1. At the same time as the .NET Core release, ASP.NET Core RC2 was also released. While I am migrating my projects to RC2, I wanted to write about how I am getting each stage done. In this post, I will show you the tooling aspect of the changes.
2016-05-22 14:04
Tugberk Ugurlu


The .NET Core Runtime RC2 was released a few days ago, along with .NET Core SDK Preview 1. At the same time as the .NET Core release, ASP.NET Core RC2 was also released. Today, I started the transition from RC1 to RC2 and I wanted to write about how I am getting each stage done. Hopefully, it will be somewhat useful to you as well. In this post, I want to talk about the tooling aspect of the transition.

Get the dotnet CLI Ready

One of the biggest shifts from RC1 to RC2 is the tooling. Before, we had DNVM, DNX and DNU as command line tools. All of them are now gone (RIP). Instead, we have one command line tool: the dotnet CLI. First, I installed the dotnet CLI on my Ubuntu 14.04 VM by running the following script, as explained here:

sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet/ trusty main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver apt-mo.trafficmanager.net --recv-keys 417A0893
sudo apt-get update
sudo apt-get install dotnet-dev-1.0.0-preview1-002702

This installed the dotnet-dev-1.0.0-preview1-002702 package on my machine, and I was good to go.


You can also use apt-cache to see all available versions:
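
# for example, search the package index for the available dotnet-dev packages
apt-cache search dotnet-dev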


Just to make things clear, I deleted the ~/.dnx folder entirely to get rid of all the RC1 garbage.

Get Visual Studio Code Ready

At this stage, I had the C# extension installed on my VS Code instance on the Ubuntu VM, which was only working for DNX based projects. So, I opened up VS Code and updated the C# extension to the latest version (which is 1.0.11). After the upgrade, I opened up a project which was on RC1, and VS Code started downloading the .NET Core Debugger.


That was a good experience. I am not sure at this point why the debugger doesn't come inside the extension itself; there is probably a reason for that, but it's not my priority to dig into right now :)

Try out the Setup

Now, I am ready to blast off with .NET Core. I used the dotnet CLI to create a sample project by typing "dotnet new --type=console" and opened up the project folder with VS Code. As soon as VS Code launched, it asked me to set the stage.
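
In case it's useful, the whole sequence up to this point was roughly the following (the folder name is made up):

mkdir hello-rc2 && cd hello-rc2   # hypothetical project folder
dotnet new --type=console         # scaffold a minimal console app
code .                            # open the folder in VS Code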


Setting the stage got me a few new files under the .vscode folder.


I jumped into the debug pane, selected the correct option and hit the play button after putting a breakpoint inside the Program.cs file.


Boom! I am in business.


Now, I am moving on to the code changes, which will involve more hair pulling, I suppose.


Microsoft Build 2016 in a Nutshell

Two weeks ago, I had an amazing opportunity to be at the Microsoft Build Conference in San Francisco, and I would like to share my experience of the conference with you in this post by highlighting what happened and giving you my personal takeaways.
2016-04-09 18:59
Tugberk Ugurlu


Two weeks ago, I had an amazing opportunity to be at the Microsoft Build Conference in San Francisco as an attendee, thanks to my amazing company Redgate. The experience was truly unique, and the number of people I met there was huge. A bit late, but I would like to share my experience of the conference with you in this post by highlighting what happened and giving you my personal takeaways. You can also check out my tweets from the Build conference.


Announcements

There were a bunch of big and small announcements from Microsoft throughout the conference. Some of these were highlighted during the two keynotes, and some others were spread across the three days. I tried to capture all of them here, but it's very likely I missed some (mostly the small ones).


Sessions


Here is the list of sessions I have attended:

As much as I wanted to attend some other sessions, I missed them, mostly due to clashes between sessions. Luckily, recordings of all Build 2016 sessions are available on Channel 9. Here is my list of sessions to catch up on:

There were also many good Channel 9 Live interviews. You can find them here. Here is a personal list of a few which are worth listening to:


Personal Takeaways

All in all, it was a great conference and, as stated, I am still catching up on the stuff I missed. Throughout the conference, I picked up a few key points, and I want to end the post with those:

  • I have seen more from Microsoft on making developers' lives easier and more productive by enabling new tools (Bash on Ubuntu on Windows), supporting multiple platforms (Service Fabric running on every environment including AWS, on-premises and Azure Stack, plus a preview of Service Fabric on Linux), open sourcing more (some parts of Xamarin have gone open source) and making existing paid tools available for free (Xamarin is now free).
  • Microsoft is more focused on bringing their existing services together and providing a cohesive ecosystem for developers. Service Fabric, Cognitive Services and Data Lake are a few examples of this.
  • .NET Core and CoreCLR are approaching finalization for v1. After RC2, I don't suppose there will be many more features added or concepts changed.
  • I think this is the first time I have seen stabilization in Microsoft's client apps story. The Universal Windows Platform (UWP) was the focus in this area this year, and it was the same the previous year.
  • I am absolutely happy to see Microsoft abandoning Windows Phone day by day. There were no sessions directly on it during the conference.
  • There were more steps towards making software manage people's lives in a better way. The Skype Bot Framework was one of these steps.
  • Microsoft (mostly the Azure group) is investing heavily in IoT solutions. Azure Functions and the new updates to the Azure IoT Suite are just a few signs of this.
  • Azure Resource Manager (ARM) and ARM templates are getting a lot of love from Microsoft, and that's the way they are pushing forward. They are even building new services on Azure on top of them.

Having a Look at dotnet CLI Tool and .NET Native Compilation in Linux

The dotnet CLI tool can be used for building .NET Core apps and libraries through your development flow (compiling, NuGet package management, running, testing, etc.) on various operating systems. Today, I will be looking at this tool on Linux, specifically at its native compilation feature.
2016-01-03 18:20
Tugberk Ugurlu


I have been following ASP.NET 5 development from the very start, and it has been an amazing experience so far. This new platform has seen so many changes, both in libraries and in concepts, but the biggest of all is about to come. The new command line tools that ASP.NET 5 brought us, like dnx and dnu, will vanish soon. However, this doesn't mean that we won't have a command line first experience. The concepts behind these tools will be carried over by a new command line tool: the dotnet CLI.

Note that the dotnet CLI is not even in beta yet. It's very natural that some of the stuff I show below may change or even be removed. So, be cautious.


Scott Hanselman gave an amazing introduction to this tool in his blog post. As indicated in that post, the new dotnet CLI tool will give us a very similar experience to other platforms like Go, Ruby and Python. This is very important because, again, it will remove another entry barrier for newcomers.

You can think of this new CLI tool as a combination of the following three, in terms of concepts:

  • csc.exe
  • msbuild.exe
  • nuget.exe

Of course, this is an understatement, but it will help you get a grasp of what the tool can do. One other important aspect of the tool is its ability to bootstrap your code and execute it. Here is how:

In order to install the dotnet CLI tool on my Ubuntu machine, I just followed the steps in the Getting Started guide for Ubuntu.


Step one is to create the project structure. My project has two files under the "hello-dotnet" folder. Program.cs:

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

project.json:

{
    "version": "1.0.0-*",
    "compilationOptions": {
        "emitEntryPoint": true
    },

    "dependencies": {
        "Microsoft.NETCore.Runtime": "1.0.1-beta-*",
        "System.IO": "4.0.11-beta-*",
        "System.Console": "4.0.0-beta-*",
        "System.Runtime": "4.0.21-beta-*"
    },

    "frameworks": {
        "dnxcore50": { }
    }
}

These are the bare essentials I need to get something written to my console window. One important piece here is the emitEntryPoint bit inside the project.json file, which indicates that the module will have an entry point: the static Main method, by default.

The second step here is to restore the defined dependencies. This can be done through the "dotnet restore" command:
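
# restores the dependencies defined in project.json
dotnet restore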


Finally, we can now execute the code that we have written and see that we can actually output some text to the console. At the same path, just run the "dotnet run" command to do that:
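
# compiles (if needed) and runs the app in place
dotnet run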


Very straightforward experience! Let's now try to compile the code through the "dotnet compile" command:
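
# compiles the app into an output folder
dotnet compile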


Notice the "hello-dotnet" file among the compilation output. You can think of this file as dnx: it can just run your app. It's basically a bootstrapper, just for your application.


So, we understand that we can just run this thing:
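
# run the bootstrapper from the compilation output folder
./hello-dotnet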


Very nice! However, that's not all! This is still a .NET application, which requires a few things to be in place in order to be executed. What we can also do here is compile a native, standalone executable (just like you can with Go):
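
# the --native switch produces a standalone, ahead-of-time compiled binary
dotnet compile --native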


Do you see the "--native" switch? That will allow you to compile a native executable binary which will be specific to the architecture you are compiling on (in my case, Ubuntu 14.04).


"hello-dotnet" file here can be executed same as the previous one but this time, it’s all machine code and everything is embedded (yes, even the .NET runtime). So, it’s very usual that you will see a significant increase in the size:


This is a promising start, and it's amazing to see that we have a unified tool to rule them all (famous last words). The name of the tool is also great; it makes things straightforward to understand based on your experience with other platforms. Seeing this type of command line first architecture adopted outside of ASP.NET is also great and will bring consistency throughout the ecosystem. I will be watching this space, as I am sure there will be more to come :)


My Talk on Profiling .NET Server Applications from Umbraco UK Festival 2015

I was at the Umbraco UK Festival 2015 in London a few weeks ago to give a talk on Profiling .NET Server Applications, and the session recording is now available to watch.
2015-11-11 13:13
Tugberk Ugurlu


I was at the Umbraco UK Festival 2015 in London a few weeks ago to give a talk on Profiling .NET Server Applications. It was a really great experience for me, as this was my first time presenting on this topic, which I love and enjoy very much. Also, the conference venue was a church, which made it a really interesting setting for a presentation, and I would be lying if I told you that I didn't feel like a deacon up there on the stage :) The fantastic news is that all sessions were recorded and all of them are available to watch now, including my Profiling .NET Server Applications talk.

You can find the slides under my Speaker Deck account, and I also encourage you to download the free e-book which gives you 52 quick tips and tricks for .NET application performance.


Finally, here are some further resources to look at if the talk was interesting for you:
