NGINX Reverse Proxy and Load Balancing for ASP.NET 5 Applications with Docker Compose

In this post, I want to show you how to expose an ASP.NET 5 application through NGINX, provide a simple load balancing mechanism running locally and orchestrate all of this through Docker Compose.
2016-01-17 14:39
Tugberk Ugurlu


We have a lot of hosting options for ASP.NET 5 under different operating systems and under different web servers like IIS. Filip W has a great blog post on running an ASP.NET 5 website under IIS. Here, I want to show you how to expose ASP.NET 5 through NGINX, provide a simple load balancing mechanism running locally and orchestrate this through Docker Compose.

It is not as if we didn't have these options before in the .NET web development world. To give you an example, you can perfectly well run an ASP.NET Web API application under Mono and expose it to the outside world behind NGINX. However, ASP.NET 5 makes these options really straightforward to adopt.

The end result we will achieve here will look like the diagram below, and you can see the sample I have put together for this here:

arch-diagram

ASP.NET 5 Application on RC1

For this sample, I have a very simple ASP.NET 5 application which gives you a hello message and lists the environment variables available on the machine. The project structure looks like this:

tugberk@ubuntu:~/apps/aspnet-5-samples/nginx-lb-sample$ tree
.
├── docker-compose.yml
├── docker-nginx.dockerfile
├── docker-webapp.dockerfile
├── global.json
├── nginx.conf
├── NuGet.Config
├── README.md
└── WebApp
    ├── hosting.json
    ├── project.json
    └── Startup.cs

I am not going to put the application code here but you can find the entire code here. However, there is one important thing that I want to mention, which is the server URL through which Kestrel will expose the ASP.NET 5 application. To make Docker happy, we need to expose the application through "0.0.0.0" rather than localhost or 127.0.0.1. Mark Rendle has a great resource explaining why, and I have the following hosting.json file which covers this issue:

{
    "server": "Microsoft.AspNet.Server.Kestrel",
    "server.urls": "http://0.0.0.0:5090"
}

Running ASP.NET 5 Application under Docker

The next step is to run the ASP.NET 5 application under Docker. With the ASP.NET Docker image on Docker Hub, this is insanely simple. Again, Mark Rendle has three amazing posts on the ASP.NET 5, Docker and Linux combination as Part 1, Part 2 and Part 3. I strongly encourage you to check them out. For my sample here, I have the below Dockerfile (reference to the file):

FROM microsoft/aspnet:1.0.0-rc1-update1

# Copy the dependency manifests first so that the restore layer is
# cached by Docker for as long as they don't change.
COPY ./WebApp/project.json /app/WebApp/
COPY ./NuGet.Config /app/
COPY ./global.json /app/
WORKDIR /app/WebApp
RUN ["dnu", "restore"]

# Now bring in the application source itself.
ADD ./WebApp /app/WebApp/

EXPOSE 5090
ENTRYPOINT ["dnx", "run"]

That's all I need to be able to run my ASP.NET 5 application under Docker. What I can do now is build the Docker image and run it:

docker build -f docker-webapp.dockerfile -t hellowebapp .
docker run -d -p 5090:5090 hellowebapp

The container is now running in detached mode and you should be able to hit the HTTP endpoint from your host:

image

From there, you can do whatever you want with the container: rebuild it, stop it, remove it, and so on.
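For reference, the usual lifecycle commands look roughly like this (a sketch; the image tag matches the build command above, while the container ID is whatever `docker ps` reports on your machine):

```shell
docker ps                                # list running containers and grab the container ID
docker stop <container-id>               # stop the container
docker rm <container-id>                 # remove the stopped container
docker build -f docker-webapp.dockerfile -t hellowebapp .   # rebuild the image
docker run -d -p 5090:5090 hellowebapp   # run a fresh container from it
```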

NGINX and Docker Compose

The last pieces of the puzzle here are NGINX and Docker Compose. For those of you who don't know what NGINX is: NGINX is a free, open-source, high-performance HTTP server and reverse proxy. In production, you really don't want to expose Kestrel to the outside world directly. Instead, you should put Kestrel behind a mature web server like NGINX, IIS or Apache HTTP Server.

There are two great videos you can watch on Kestrel and Linux hosting which give you the reasons why you should put Kestrel behind a web server. I strongly encourage you to check them out before putting your application into production on Linux.

Docker Compose, on the other hand, is a completely different type of tool. It is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file (which is a YAML file) to configure your application’s services. This is a perfect fit for what we want to achieve here since we will have at least three containers running:

  • ASP.NET 5 application 1 Container: An instance of the ASP.NET 5 application
  • ASP.NET 5 application 2 Container: Another instance of the ASP.NET 5 application
  • NGINX Container: An NGINX process which will proxy the requests to ASP.NET 5 applications.

Let's start by configuring NGINX and making it possible to run it under Docker. This is going to be very easy as NGINX also has an image up on Docker Hub. We will use this image and tell NGINX to read our config file, which looks like this:

worker_processes 4;

events { worker_connections 1024; }

http {
    upstream web-app {
        server webapp1:5090;
        server webapp2:5090;
    }

    server {
      listen 80;

      location / {
        proxy_pass http://web-app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
      }
    }
}

This configuration file has some generic stuff in it but, most importantly, it has our load balancing and reverse proxy configuration. It tells NGINX to accept requests on port 80 and proxy those requests to webapp1:5090 and webapp2:5090. Check out the NGINX reverse proxy guide and load balancing guide for more information about how you can customize the proxying and load balancing, but the above configuration is enough for this sample.
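For instance, the distribution doesn't have to be an even round-robin; a sketch of a weighted variant of the same upstream block (using NGINX's `weight` parameter) would look like this:

```nginx
upstream web-app {
    server webapp1:5090 weight=3;  # receives roughly three out of every four requests
    server webapp2:5090;
}
```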

There is also an important part in this NGINX configuration to make Kestrel happy. Kestrel has an annoying bug in RC1 which has already been fixed for RC2. To work around the issue, you need to set the "Connection: keep-alive" header, which is what we are doing with the "proxy_set_header Connection keep-alive;" declaration in our NGINX configuration.

Here is what the NGINX Dockerfile looks like (reference to the file):

FROM nginx
COPY ./nginx.conf /etc/nginx/nginx.conf

You might wonder at this point what webapp1 and webapp2 (which we have indicated inside the NGINX configuration file) map to. These are the DNS references for the containers which will run our ASP.NET 5 applications; when we link them in our Docker Compose file, the DNS mapping will happen automatically for the container names. Finally, here is what our composition looks like inside the Docker Compose file (reference to the file):

webapp1:
  build: .
  dockerfile: docker-webapp.dockerfile
  container_name: hasample_webapp1
  ports:
    - "5091:5090"
    
webapp2:
  build: .
  dockerfile: docker-webapp.dockerfile
  container_name: hasample_webapp2
  ports:
    - "5092:5090"

nginx:
  build: .
  dockerfile: docker-nginx.dockerfile
  container_name: hasample_nginx
  ports:
    - "5000:80"
  links:
    - webapp1
    - webapp2

You can see that under the third container definition, we linked the two previously defined containers to the NGINX container. Alternatively, you may want to look at service discovery in the context of Docker instead of linking.
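If you're curious how the linking works under the hood, Docker links (at the time of writing) inject host entries for the linked containers into the NGINX container, which you can inspect once everything is up (a sketch, using the container name from the Compose file above):

```shell
# Print the NGINX container's hosts file; you should see entries
# mapping webapp1 and webapp2 to the linked containers' IPs.
docker exec hasample_nginx cat /etc/hosts
```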

Now we have everything in place and all we need to do is run two docker-compose commands (under the directory where we have the Docker Compose file) to get the application up and running:

docker-compose build
docker-compose up

After these, we should see three containers running. We should also be able to hit localhost:5000 from the host machine and see that the load is being distributed to both ASP.NET 5 application containers:
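You can also observe the distribution from the command line by firing a handful of requests at the NGINX endpoint and counting the distinct response bodies. This sketch assumes each application instance returns something that identifies it (the sample app here lists environment variables, and each container's host name differs):

```shell
# Fire ten requests at the NGINX front end and group identical responses;
# with round-robin over two backends we expect two distinct bodies.
for i in $(seq 1 10); do
  curl -s http://localhost:5000/
done | sort | uniq -c
```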

compose2

Pretty great! However, this is just a sample for demo purposes to show how simple it is to have an environment like this up and running locally. It probably provides no performance gains when you run all the containers on one box. My next step is going to be to get HAProxy into this mix and let it do the load balancing instead.

My Summary of CodeMash 2016

I had the pleasure of attending CodeMash this year to give two talks on ASP.NET 5 and Database Lifecycle Management. Here is my summary of the conference and references to the resources I used in my talks.
2016-01-11 11:05
Tugberk Ugurlu


I had the pleasure of attending CodeMash this year to give two talks. The conference was generally good; there were lots of people from several parts of the world and I got to meet a lot of smart and amazing people.

2016-01-08 17.08.27

I only had a chance to attend the last two days of the conference and during those days, I attended several talks and missed a bunch of good ones (and there are no recordings, which makes me sad):

I learnt some new things in every session, which is a great feeling. In particular, Jennifer Marsman’s session on the combination of EEG + machine learning + lie detection was absolutely mind-blowing to watch. I don’t think I blinked during the entire session :)

2016-01-07 14.29.14

Here are a few things I took away from the conference by attending sessions and talking to people:

  • Polyglot persistence is a general topic of interest and people are leaning towards it.
  • People are trying to apply or understand microservices and their benefits.
  • Docker makes the above two approaches easy to adopt and people are aware of that.
  • There is a lot of concern around how to move to ASP.NET 5, especially to .NET Core.
  • Machine learning opens up a lot of interesting possibilities for IoT products and service-based solutions.
  • A migrations-based approach is definitely a must-have in the DLM process.

Many of these were nice to see since they validated some of my thoughts and confirmed that I am on the right track. Some of them gave me new excitement, and it didn’t take long to accept the challenges :)

The best part of the conference was that I had a chance to meet lots of new people and put a face to some that I have known through Twitter, like Matt Johnson, Barry Dorrans, Darrel Miller and lots of other amazing people.

As mentioned in my previous post, I gave two talks on ASP.NET 5 and Database Lifecycle Management. I made sure that the resources I showed are available online. The CodeMash organizers maintain a GitHub repository for the resources of all the sessions. I have put my ASP.NET 5 talk resources and DLM talk resources there, too.

ASP.NET 5: Getting Your Cheese Back

CYM-0vEWkAAgDhE

Slides for this talk are available under my Speaker Deck account.

You can also find the samples I used during the session under the aspnet-5-samples GitHub repository (permalink to the version used during the presentation). I also showed another sample which made use of Docker and Docker Compose: ModernShopping.

Database Lifecycle Management: Getting it Right

2016-01-08 10.56.53

Again, slides for this talk are also available under my Speaker Deck account.

The demo application I have used during the session is available here (permalink to the version used during the presentation).

Conclusion

Overall, it was a great experience to be there. I want to thank the CodeMash organizers for inviting me to speak at the conference. It was a really valuable opportunity for me to stand in front of that amazing crowd. I also want to thank Redgate for covering my travel expenses and sparing me for the time of the conference. I want to emphasize again that Redgate is an amazing company to be part of!

Redgate has several opportunities that might fit you. I highly encourage you to check them out.

I want to end this post with a reference to a tweet which shows the message David Neal gave at the end of his talk (which was the last talk of CodeMash):

Having a Look at dotnet CLI Tool and .NET Native Compilation in Linux

The dotnet CLI tool can be used for building .NET Core apps and libraries through your development flow (compiling, NuGet package management, running, testing, etc.) on various operating systems. Today, I will be looking at this tool in Linux, specifically its native compilation feature.
2016-01-03 18:20
Tugberk Ugurlu


I have been following ASP.NET 5 development from the very start and it has been an amazing experience so far. This new platform has seen so many changes, both in libraries and in concepts, but the biggest of all is about to come. The new command line tools that ASP.NET 5 brought us, like dnx and dnu, will vanish soon. However, this doesn’t mean that we won’t have a command line first experience. The concepts behind these tools will be carried over by a new command line tool: dotnet CLI.

Note that dotnet CLI is not even in beta yet. It’s very natural that some of the stuff I show below may change or even be removed. So, be cautious.

image

Scott Hanselman gave an amazing introduction to this tool in his blog post. As indicated in that post, the new dotnet CLI tool will give us an experience very similar to that of other platforms like Go, Ruby and Python. This is very important because, again, it will remove another entry barrier for newcomers.

You can think of this new CLI tool as a combination of the following three, in terms of concepts:

  • csc.exe
  • msbuild.exe
  • nuget.exe

Of course, this is an understatement, but it will help you get a grasp of what the tool can do. One other important aspect of the tool is being able to bootstrap your code and execute it. Here is how:

In order to install the dotnet CLI tool onto my Ubuntu machine, I just followed the steps in the Getting Started guide for Ubuntu.

image

Step one is to create a project structure. My project has two files under the "hello-dotnet" folder. Program.cs:

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

project.json:

{
    "version": "1.0.0-*",
    "compilationOptions": {
        "emitEntryPoint": true
    },

    "dependencies": {
        "Microsoft.NETCore.Runtime": "1.0.1-beta-*",
        "System.IO": "4.0.11-beta-*",
        "System.Console": "4.0.0-beta-*",
        "System.Runtime": "4.0.21-beta-*"
    },

    "frameworks": {
        "dnxcore50": { }
    }
}

These are the bare essentials that I need to get something output to my console window. One important piece here is the emitEntryPoint bit inside the project.json file, which indicates that the module will have an entry point, which is the static Main method by default.

The second step here is to restore the defined dependencies. This can be done through the "dotnet restore" command:

image

Finally, we can now execute the code that we have written and see that we can actually output some text to the console. At the same path, just run the "dotnet run" command to do that:

image

A very straightforward experience! Let’s now try to compile the code through the "dotnet compile" command:

image

Notice the "hello-dotnet" file there. You can think of this file as being like dnx, except that it can only run your app. It’s basically the bootstrapper just for your application.

image

So, we understand that we can just run this thing:

image

Very nice! However, that’s not all! This is still a .NET application, which requires a few things to be in place in order to be executed. What we can also do here is compile native, standalone executables (just like you can with Go).

image

Do you see the "--native" switch? That will allow you to compile a native executable binary which will be specific to the architecture that you are compiling on (in my case, it’s Ubuntu 14.04):

image

The "hello-dotnet" file here can be executed the same as the previous one, but this time it’s all machine code and everything is embedded (yes, even the .NET runtime). So, it’s to be expected that you will see a significant increase in the size:
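To recap the whole flow as commands (all of these appear in the screenshots above; exact output locations vary between preview builds, so I won't guess at paths here):

```shell
dotnet restore           # restore the dependencies listed in project.json
dotnet run               # compile (if needed) and execute in one step
dotnet compile           # emit IL plus the per-application "hello-dotnet" bootstrapper
dotnet compile --native  # emit a standalone, architecture-specific machine-code binary
```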

image

This is a promising start and it’s amazing to see that we have a unified tool to rule them all (famous last words). The name of the tool is also great: it makes the tool straightforward to understand based on your experiences with other platforms. Seeing this type of command line first architecture adopted outside of ASP.NET is also great and will bring consistency throughout the ecosystem. I will be watching this space as I am sure there will be more to come :)

Resources

Speaking at CodeMash 2016 in Sandusky, Ohio

I will be speaking at CodeMash 2016 in Sandusky, Ohio and I will be talking about ASP.NET 5 and Database Lifecycle Management. I hope to see some of you there :)
2016-01-03 13:58
Tugberk Ugurlu


Attending a developer conference is an amazing way to start a new year. I love conferences because they are where I learn the most. I am a believer in an experience-driven life, and conferences are the best place to learn about other people’s experiences and different cultures. 2016 will start for me with CodeMash, a unique event that will educate developers on current practices, methodologies, and technology trends in a variety of platforms and development languages such as Java, .NET, Ruby, Python and PHP. The conference will be held in Sandusky, Ohio between the 5th and 8th of January, 2016. I will be talking about two topics at the conference and both of those talks will be held on the last day of the conference, Friday the 8th of January.

image

The first one is at 08:30 AM: ASP.NET 5: How to Get Your Cheese Back. This is mainly targeted at people who are interested in learning the main reasons to adopt ASP.NET 5 and what it will bring to the table. Even if you haven’t done any .NET development before, you will still find a lot of interesting things here, as I believe that one of the biggest advantages of the new .NET ecosystem is that there are no big entry barriers for newcomers anymore.

My last talk is going to be on DLM (Database Lifecycle Management), at 11:00 AM: Database Lifecycle Management: Getting it Right. If you are working with an RDBMS in your daily job and want to automate the release process for changes, you will definitely find something valuable in this talk. I will mainly be giving examples of SQL Server changes using the DLM Automation Suite tools, but this talk is not about tools and SQL Server. It’s all about the concepts and challenges of managing the lifecycle of the database schema.

Unfortunately, I will only be able to attend the last two days of the conference, but that’s better than nothing. I am so excited about the conference and if you are going to be there, ping me (through Twitter, LinkedIn, etc.) to have a chat and meet. See you there :)

Getting Started with Neo4j in .NET with Neo4jClient Library

I have been looking into Neo4j, a graph database, for a while and here is what impressed me the most while working with it through the Neo4jClient .NET library.
2015-12-13 19:07
Tugberk Ugurlu


I am really in love with the side project I am working on at the moment. It is broken down into little "micro" applications (a.k.a. microservices), uses multiple data storage technologies and is brought together through Docker. As a result, the entire solution feels very natural, unrestricted and very manageable.

One part of this solution requires answering a question which involves going very deep into the data hierarchy. To illustrate what I mean, have a look at the below graph:

movies-with-only-agency-employees-2

Here, we have an agency which has acquired some actors. We also have some movies which have employed some actors. You can model this in various data storage systems in various ways, but the question I want to answer is the following: "What are the movies which employed all of their actors from Agency-A?". Even thinking about the query you would write in T-SQL for this one is enough to melt your brain. It doesn’t mean that SQL Server, MySQL, etc. are bad data storage systems; it’s just that this type of question is not among those systems' strengths.

Enter Neo4j

Neo4j is an open-source graph database implemented in Java and accessible from software written in other languages using the Cypher query language through a transactional HTTP endpoint (so Wikipedia says). In Neo4j, your data set consists of nodes and relationships between those nodes, which you interact with through the Cypher query language. Cypher is a very powerful, declarative, SQL-inspired language for describing patterns in graphs. The biggest thing that stands out when working with Cypher is the relationships: relationships are first-class citizens in Cypher. Consider the following Cypher query, which is taken from the movie sample in the Neo4j web client:

You can bring up this movie sample by just running ":play movie graph" from the Neo4j web client and walking through it.

MATCH (tom:Person {name: "Tom Hanks"})-[:ACTED_IN]->(tomHanksMovies) RETURN tom,tomHanksMovies

This will list all Tom Hanks movies. When you read it from left to right, you will pretty much understand what it does anyway. The interesting part here is the ACTED_IN relationship inside the query. You may think at this point that this is not a big deal, as it probably translates to the below T-SQL query:

SELECT * FROM Movies m
INNER JOIN MovieActors ma ON ma.MovieId = m.Id
WHERE ma.ActorId = 1;

However, you will start seeing the power as the questions get more interesting. For example, let’s find Tom Hanks’ co-actors from every movie he has acted in (again, from the same sample):

MATCH (tom:Person {name:"Tom Hanks"})-[:ACTED_IN]->(m)<-[:ACTED_IN]-(coActors) RETURN coActors.name

It’s mind-blowingly complicated to retrieve this from a relational database, but with Cypher it is dead easy. You can start to see that in Neo4j it’s all about building up nodes and declaring the relationships between them to get the answer to your question.

Neo4j in .NET

As Neo4j communicates over HTTP, you can pretty much find a client implementation in every ecosystem and .NET is no exception. The amazing people at Readify maintain the Neo4jClient OSS project. It’s extremely easy to use and the library has very good documentation. I especially liked the part where they have documented the thread safety concerns of GraphClient. It was the first thing I wanted to find out and there it was.

Going back to the example I mentioned at the beginning of this post, I tried to handle this through the .NET client. Let’s walk through what I did.

You can find the below sample in my DotNetSamples GitHub repository.

First, I initiated the GraphClient and made some adjustments:

var client = new GraphClient(new Uri("http://localhost:7474/db/data"), "neo4j", "1234567890")
{
    JsonContractResolver = new CamelCasePropertyNamesContractResolver()
};

client.Connect();

I started by creating the agency:

var agencyA = new Agency { Name = "Agency-A" };
client.Cypher
    .Create("(agency:Agency {agencyA})")
    .WithParam("agencyA", agencyA)
    .ExecuteWithoutResultsAsync()
    .Wait();

Next is creating the actors and the ACQUIRED relationship between the agency and some of the actors (in the below case, only the odd-numbered actors):

for (int i = 1; i <= 5; i++)
{
    var actor = new Person { Name = $"Actor-{i}" };

    if ((i % 2) == 0)
    {
        client.Cypher
            .Create("(actor:Person {newActor})")
            .WithParam("newActor", actor)
            .ExecuteWithoutResultsAsync()
            .Wait();
    }
    else
    {
        client.Cypher
            .Match("(agency:Agency)")
            .Where((Agency agency) => agency.Name == agencyA.Name)
            .Create("agency-[:ACQUIRED]->(actor:Person {newActor})")
            .WithParam("newActor", actor)
            .ExecuteWithoutResultsAsync()
            .Wait();
    }
}

Then, I created the movies:

char[] chars = Enumerable.Range('a', 'z' - 'a' + 1).Select(i => (Char)i).ToArray();
for (int i = 0; i < 3; i++)
{
    var movie = new Movie { Name = $"Movie-{chars[i]}" };

    client.Cypher
        .Create("(movie:Movie {newMovie})")
        .WithParam("newMovie", movie)
        .ExecuteWithoutResultsAsync()
        .Wait();
}

Lastly, I related the existing movies and actors through the EMPLOYED relationship.

client.Cypher
    .Match("(movie:Movie)", "(actor1:Person)", "(actor5:Person)")
    .Where((Movie movie) => movie.Name == "Movie-a")
    .AndWhere((Person actor1) => actor1.Name == "Actor-1")
    .AndWhere((Person actor5) => actor5.Name == "Actor-5")
    .Create("(movie)-[:EMPLOYED]->(actor1), (movie)-[:EMPLOYED]->(actor5)")
    .ExecuteWithoutResultsAsync()
    .Wait();

client.Cypher
    .Match("(movie:Movie)", "(actor1:Person)", "(actor3:Person)", "(actor5:Person)")
    .Where((Movie movie) => movie.Name == "Movie-b")
    .AndWhere((Person actor1) => actor1.Name == "Actor-1")
    .AndWhere((Person actor3) => actor3.Name == "Actor-3")
    .AndWhere((Person actor5) => actor5.Name == "Actor-5")
    .Create("(movie)-[:EMPLOYED]->(actor1), (movie)-[:EMPLOYED]->(actor3), (movie)-[:EMPLOYED]->(actor5)")
    .ExecuteWithoutResultsAsync()
    .Wait();

client.Cypher
    .Match("(movie:Movie)", "(actor2:Person)", "(actor5:Person)")
    .Where((Movie movie) => movie.Name == "Movie-c")
    .AndWhere((Person actor2) => actor2.Name == "Actor-2")
    .AndWhere((Person actor5) => actor5.Name == "Actor-5")
    .Create("(movie)-[:EMPLOYED]->(actor2), (movie)-[:EMPLOYED]->(actor5)")
    .ExecuteWithoutResultsAsync()
    .Wait();

When I run this, I have a data set that I can play with. I jumped back to the web client and ran the below query to retrieve the relationships:

MATCH (agency:Agency)-[:ACQUIRED]->(actor:Person)<-[:EMPLOYED]-(movie:Movie)
RETURN agency, actor, movie

One of the greatest features of the web client is that you can view your query result as a graph representation. How cool is that? You can see the exact similarity between the below result and the graph I put together above:

image

Of course, we can also run the same query through the .NET client and grab the results:

var results = client.Cypher
    .Match("(agency:Agency)-[:ACQUIRED]->(actor:Person)<-[:EMPLOYED]-(movie:Movie)")
    .Return((agency, actor, movie) => new
    {
        Agency = agency.As<Agency>(),
        Actor = actor.As<Person>(),
        Movie = movie.As<Movie>()
    }).Results;

Going Beyond

However, how can we answer my "What are the movies which employed all of their actors from Agency-A?" question? As I am very new to Neo4j, I struggled a lot with this. In fact, I was not even sure whether it was possible in Neo4j. I asked this as a question on Stack Overflow (as every stuck developer does) and Christophe Willemsen gave an amazing answer which literally blew my mind. I warn you now: the below query is a bit complex and I am still going through it piece by piece to understand it, but it does the trick:

MATCH (agency:Agency { name:"Agency-A" })-[:ACQUIRED]->(actor:Person)<-[:EMPLOYED]-(movie:Movie)
WITH DISTINCT movie, collect(actor) AS actors
MATCH (movie)-[:EMPLOYED]->(allemployees:Person)
WITH movie, actors, count(allemployees) AS c
WHERE c = size(actors)
RETURN movie.name

The result is as you would expect:

image

Still Dipping My Toes

I am hooked, but this doesn’t mean that Neo4j is the solution to my problems. I am still evaluating it by implementing a few features on top of it. There are a few questions which I haven’t been able to answer exactly yet:

  • How does this scale with large data sets?
  • Can I shard the data across servers?
  • What are the hosted options?
  • What is the story on geo location queries?

However, the architecture I have in my solution allows me to evaluate this type of technology. In the worst-case scenario, Neo4j will not work for me, but I will be able to replace it with something else (which I doubt will be the case).

Resources
