
ASP.NET Core Authentication in a Load Balanced Environment with HAProxy and Redis

Token based authentication is a fairly common way of authenticating a user for an HTTP application. However, handling this in a load balanced environment has always required extra care. In this post, I will show you how this is handled in ASP.NET Core by demonstrating it with HAProxy and Redis with the help of Docker.
2016-11-28 23:31
Tugberk Ugurlu


Token based authentication is a fairly common way of authenticating a user for an HTTP application. ASP.NET and its frameworks have supported this out of the box without much effort, with different types of authentication approaches such as cookie based authentication, bearer token authentication, etc. ASP.NET Core is no exception to this and it got even better (which we will see in a while).

However, handling this in a load balanced environment has always required extra care, as all of the nodes should be able to read a valid authentication token even if that token was written by another node. The old-school ASP.NET solution to this is to keep the Machine Key in sync across all the nodes. The machine key, for those who are not familiar with it, is used to encrypt and decrypt the authentication tokens under ASP.NET, and each machine by default has its own unique one. However, you can override this and put your own one in place per application through a setting inside the Web.config file. This approach had its own problems, and with ASP.NET Core all data protection APIs have been revamped, which made room for big improvements in this area such as key expiration and rolling, key encryption at rest, etc. One of those improvements is the ability to store keys in different storage systems, which is what I am going to touch on in this post.

The Problem

Imagine a case where we have an ASP.NET Core application which uses cookie based authentication and stores its user data in MongoDB, implemented using ASP.NET Core Identity and its MongoDB provider.

[Diagram: a single application node talking to MongoDB, without a load balancer]

This setup is all fine and our application should function perfectly. However, if we put this application behind HAProxy and scale it out to two nodes, we will start seeing errors like the one below:

System.Security.Cryptography.CryptographicException: The key {3470d9c3-e59d-4cd8-8668-56ba709e759d} was not found in the key ring.
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.UnprotectCore(Byte[] protectedData, Boolean allowOperationsOnRevokedKeys, UnprotectStatus& status)
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.DangerousUnprotect(Byte[] protectedData, Boolean ignoreRevocationErrors, Boolean& requiresMigration, Boolean& wasRevoked)
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.Unprotect(Byte[] protectedData)
   at Microsoft.AspNetCore.Antiforgery.Internal.DefaultAntiforgeryTokenSerializer.Deserialize(String serializedToken)

Let’s look at the below diagram to understand why we are having this problem:

[Diagram: two application nodes behind HAProxy, each with its own Data Protection key ring]

By default, ASP.NET Core Data Protection is wired up to store its keys on the file system. If you have your application running on multiple nodes as shown in the above diagram, each node will have its own keys to protect and unprotect sensitive information like authentication cookie data. As you can guess, this behaviour is problematic with the above structure since one node cannot unprotect data which another node protected.

The Solution

As I mentioned before, one of the extensibility points of the ASP.NET Core Data Protection stack is the storage of the data protection keys. This can be a central place which all the nodes of our web application can reach. Let’s look at the below diagram to understand what we mean by this:

[Diagram: two application nodes behind HAProxy, sharing their Data Protection keys through Redis]

Here, we have Redis as our Data Protection key storage. Redis is a good choice here as it’s well suited for key-value storage, and that’s what we need. With this setup, it will be possible for both nodes of our application to read protected data regardless of which node wrote it.

Wiring up Redis Data Protection Key Storage

With ASP.NET Core 1.0.0, we had to write the implementation ourselves to make ASP.NET Core store Data Protection keys on Redis, but with the 1.1.0 release the team also shipped a NuGet package which makes it really easy to wire this up: Microsoft.AspNetCore.DataProtection.Redis. This package allows us to swap the data protection storage destination to Redis. We can do this while we are configuring services as part of ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    // sad but a giant hack :(
    // https://github.com/StackExchange/StackExchange.Redis/issues/410#issuecomment-220829614
    var redisHost = Configuration.GetValue<string>("Redis:Host");
    var redisPort = Configuration.GetValue<int>("Redis:Port");
    var redisIpAddress = Dns.GetHostEntryAsync(redisHost).Result.AddressList.Last();
    var redis = ConnectionMultiplexer.Connect($"{redisIpAddress}:{redisPort}");

    services.AddDataProtection().PersistKeysToRedis(redis, "DataProtection-Keys");
    services.AddOptions();

    // ...
}

I have wired it up exactly like this in my sample application in order to show you a working example. It’s an example taken from the ASP.NET Identity repository, slightly changed to make it work with the MongoDB Identity store provider.

Note here that the configuration values above are specific to my implementation and it doesn’t have to be that way. See these lines inside my Docker Compose file and these inside my Startup class to understand how it’s being passed and hooked up.
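
To give a rough idea, a minimal sketch of that wiring could look like the below, assuming the standard ConfigurationBuilder APIs and the WebApp_ environment variable prefix used in the Compose file; the actual code in the sample may differ slightly:

public Startup(IHostingEnvironment env)
{
    // Environment variables prefixed with "WebApp_" (see docker-compose.yml below) override
    // the JSON settings; "WebApp_Redis__Host" surfaces as "Redis:Host" and so on.
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true)
        .AddEnvironmentVariables(prefix: "WebApp_");

    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }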

The sample application can be run on Docker through Docker Compose and it will get a few things up and running:

  • Two nodes of the application
  • A MongoDB instance
  • A Redis instance


You can see my docker-compose.yml file to understand how I hooked things together:

mongo:
    build: .
    dockerfile: mongo.dockerfile
    container_name: haproxy_redis_auth_mongodb
    ports:
      - "27017:27017"

redis:
    build: .
    dockerfile: redis.dockerfile
    container_name: haproxy_redis_auth_redis
    ports:
      - "6379:6379"

webapp1:
    build: .
    dockerfile: app.dockerfile
    container_name: haproxy_redis_auth_webapp1
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_server.urls=http://0.0.0.0:6000
      - WebApp_MongoDb__ConnectionString=mongodb://mongo:27017
      - WebApp_Redis__Host=redis
      - WebApp_Redis__Port=6379
    links:
      - mongo
      - redis

webapp2:
    build: .
    dockerfile: app.dockerfile
    container_name: haproxy_redis_auth_webapp2
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_server.urls=http://0.0.0.0:6000
      - WebApp_MongoDb__ConnectionString=mongodb://mongo:27017
      - WebApp_Redis__Host=redis
      - WebApp_Redis__Port=6379
    links:
      - mongo
      - redis

app_lb:
    build: .
    dockerfile: haproxy.dockerfile
    container_name: app_lb
    ports:
      - "5000:80"
    links:
      - webapp1
      - webapp2
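
With this in place, getting the whole stack up locally should just be the usual Docker Compose routine, run from the directory that contains this file:

docker-compose build
docker-compose up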

HAProxy is also configured to balance the load between the two application nodes, as you can see inside the haproxy.cfg file which we copy to the relevant path inside our dockerfile:

global
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice

defaults
  log global
  mode http
  option httplog
  option dontlognull
  timeout connect 5000
  timeout client 10000
  timeout server 10000

frontend balancer
  bind 0.0.0.0:80
  mode http
  default_backend app_nodes

backend app_nodes
  mode http
  balance roundrobin
  option forwardfor
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header Connection keep-alive
  http-request add-header X-Forwarded-Proto https if { ssl_fc }
  option httpchk GET / HTTP/1.1\r\nHost:localhost
  server webapp1 webapp1:6000 check
  server webapp2 webapp2:6000 check
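
For completeness, the dockerfile for the load balancer itself can be as small as the sketch below, assuming the official haproxy image from Docker Hub, which reads its configuration from /usr/local/etc/haproxy/haproxy.cfg (the actual file in the sample may pin a different tag):

FROM haproxy:1.6
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg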

All of these are the details of how I wired up the sample to work. If we now look closely at the header of the web page, we should see the server name written inside the parentheses. If you refresh enough, you will see that part alternating between the two server names:

[Screenshots: the page header showing each of the two server names in turn]

This confirms that our load is balanced between the two application nodes. The rest of the demo is actually very boring. It should just work as you expect it to work. Go to the “Register” page and register for an account, log out and log back in. All of those interactions should just work. If we look inside the Redis instance, we should also see that the Data Protection key has been written there:

docker run -it --link haproxy_redis_auth_redis:redis --rm redis redis-cli -h redis -p 6379
LRANGE DataProtection-Keys 0 10

[Screenshot: the Data Protection key stored under the DataProtection-Keys list in Redis]

Conclusion and Going Further

I believe that I was able to show you what you need to care about in terms of authentication when you scale out your application to multiple nodes. However, do not take my sample as is and apply it to your production application :) There are a few important things that suck in my sample, like the fact that my application nodes talk to Redis in an unencrypted fashion. You may want to consider exposing Redis over a proxy which supports encryption.

The other important bit in my implementation is that all of the nodes of my application act as Data Protection key generators. Even though I haven’t seen many problems with this in practice so far, you may want to restrict key generation to only one node. You can achieve this by calling DisableAutomaticKeyGeneration like below during the configuration stage on your secondary nodes:

public void ConfigureServices(IServiceCollection services)
{
     services.AddDataProtection().DisableAutomaticKeyGeneration();
}

I would suggest determining whether a node is primary or not through a configuration value, so that you can override it through an environment variable, for example.
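
A minimal sketch of that idea might look like the following, assuming a hypothetical DataProtection:IsPrimaryNode configuration value which could be fed in per node as an environment variable:

public void ConfigureServices(IServiceCollection services)
{
    // "redis" is the ConnectionMultiplexer created earlier in this method (see the first snippet).
    var dataProtection = services.AddDataProtection()
        .PersistKeysToRedis(redis, "DataProtection-Keys");

    // "DataProtection:IsPrimaryNode" is a hypothetical configuration key; only the node
    // marked as primary generates new keys, the others just consume the shared key ring.
    if (!Configuration.GetValue<bool>("DataProtection:IsPrimaryNode"))
    {
        dataProtection.DisableAutomaticKeyGeneration();
    }
}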

Upcoming Conferences and Talks

I am going to be at a few conferences in the upcoming weeks and I would like to share them here with you. If you are going to be around for any of the below events, let's meet and say hi to each other :)
2016-02-27 16:45
Tugberk Ugurlu


I am going to be at a few conferences in the upcoming weeks and I would like to share them here with you. The main objective here is to tell you where I am going to be, and this should help with meeting new people and learning about different experiences. Jeremy Clark, a friend I met at Codemash 2016, has an amazing blog post on becoming a social developer. I encourage you to check that out to see why and how.

DevConf, Johannesburg (8th of March)


I am very, very excited about DevConf. The source of this excitement is the talk I will deliver there and the content of the conference. I will be presenting on architecting polyglot-persistent solutions as part of the Persistence and Data track. This is a topic which is very close to my heart, as I had first-hand experience while working on Zleek of what a big difference this type of architecture can make to your software product. It will also be the first time I deliver this talk.

The rest of the agenda also looks pretty impressive. So, I am sure this will be well worth the long trip to South Africa :)

Microsoft Build 2016, San Francisco (30th of March - 1st of April)


At the end of March, I will also be in San Francisco to attend the Microsoft Build conference. This is also very exciting for several reasons. The obvious one is that there will be a lot of existing and soon-to-be friends there from the community, and this is a very developer-centric conference in terms of Microsoft products. Also, I don't get to attend conferences that often as an attendee. I am sure I will feel the comfort of gliding through the session rooms and not having to prepare for a talk. If you add the fact that this is going to be my first trip to San Francisco, it will be real fun :)

Last year, my wish for Build Conference announcements came true with Visual Studio Code and I am very much looking forward to this year's announcements, too.

I T.A.K.E. Unconference 2016, Bucharest (19th - 20th of May)


At I T.A.K.E. Unconference 2016, I will talk about two topics which I think are very interesting considering the type of software solutions we build and the way we produce them nowadays.

I am very much looking forward to both of them since it's going to be the first time that I present these sessions. It seems like it's still possible to register, and you can also check out the rest of the schedule here.

If you are going to be around for any of the above events, let's meet and say hi to each other :)

NGINX Reverse Proxy and Load Balancing for ASP.NET 5 Applications with Docker Compose

In this post, I want to show you what it looks like to expose ASP.NET 5 through NGINX, provide a simple load balancing mechanism running locally and orchestrate this through Docker Compose.
2016-01-17 14:39
Tugberk Ugurlu


We have a lot of hosting options with ASP.NET 5 under different operating systems and under different web servers like IIS. Filip W has a great blog post on Running ASP.NET 5 website under IIS. Here, I want to show you what it looks like to expose ASP.NET 5 through NGINX, provide a simple load balancing mechanism running locally and orchestrate this through Docker Compose.

It is not like we didn't have these options before in the .NET web development world. To give you an example, you can perfectly well run an ASP.NET Web API application under Mono and expose it to the outside world behind NGINX. However, ASP.NET 5 makes these options really straightforward to adopt.

The end result we will achieve here will look like the below, and you can see the sample I have put together for this here:

[Diagram: NGINX reverse proxying and load balancing across two ASP.NET 5 application containers]

ASP.NET 5 Application on RC1

For this sample, I have a very simple ASP.NET 5 application which gives you a hello message and lists the environment variables available on the machine. The project structure looks like this:

tugberk@ubuntu:~/apps/aspnet-5-samples/nginx-lb-sample$ tree
.
├── docker-compose.yml
├── docker-nginx.dockerfile
├── docker-webapp.dockerfile
├── global.json
├── nginx.conf
├── NuGet.Config
├── README.md
└── WebApp
    ├── hosting.json
    ├── project.json
    └── Startup.cs

I am not going to put the application code here but you can find the entire code here. However, there is one important thing that I want to mention, which is the server URL through which you expose the ASP.NET 5 application with Kestrel. To make Docker happy, we need to expose the application through "0.0.0.0" rather than localhost or 127.0.0.1. Mark Rendle has a great resource on this explaining why, and I have the following hosting.json file which also covers this issue:

{
    "server": "Microsoft.AspNet.Server.Kestrel",
    "server.urls": "http://0.0.0.0:5090"
}

Running ASP.NET 5 Application under Docker

The next step is to run the ASP.NET 5 application under Docker. With the ASP.NET Docker image on Docker Hub, this is insanely simple. Again, Mark Rendle has three amazing posts on the ASP.NET 5, Docker and Linux combination as Part 1, Part 2 and Part 3. I strongly encourage you to check them out. For my sample here, I have the below Dockerfile (reference to the file):

FROM microsoft/aspnet:1.0.0-rc1-update1

COPY ./WebApp/project.json /app/WebApp/
COPY ./NuGet.Config /app/
COPY ./global.json /app/
WORKDIR /app/WebApp
RUN ["dnu", "restore"]
ADD ./WebApp /app/WebApp/

EXPOSE 5090
ENTRYPOINT ["dnx", "run"]

That's all I need to be able to run my ASP.NET 5 application under Docker. What I can do now is to build the Docker image and run it:

docker build -f docker-webapp.dockerfile -t hellowebapp .
docker run -d -p 5090:5090 hellowebapp

The container is now running in detached mode and you should be able to hit the HTTP endpoint from your host:

[Screenshot: the application responding on the host]

From there, you can do whatever you want with the container: rebuild it, stop it, remove it, and so on.
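
For example, stopping and removing it again is just a couple of commands (the container ID comes from the docker ps output):

docker ps                    # find the running container's ID
docker stop <container-id>   # stop the running container
docker rm <container-id>     # remove it entirely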

NGINX and Docker Compose

The last pieces of the puzzle here are NGINX and Docker Compose. For those of you who don't know what NGINX is: NGINX is a free, open-source, high-performance HTTP server and reverse proxy. In production, you really don't want to expose Kestrel to the outside world directly. Instead, you should put Kestrel behind a mature web server like NGINX, IIS or Apache Web Server.

There are two great videos you can watch on Kestrel and Linux hosting which give you the reasons why you should put Kestrel behind a web server. I strongly encourage you to check them out before putting your application into production on Linux.

Docker Compose, on the other hand, is a completely different type of tool. It is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file (which is a YAML file) to configure your application’s services. This is a perfect fit for what we want to achieve here since we will have at least three containers running:

  • ASP.NET 5 application 1 Container: An instance of the ASP.NET 5 application
  • ASP.NET 5 application 2 Container: Another instance of the ASP.NET 5 application
  • NGINX Container: An NGINX process which will proxy the requests to ASP.NET 5 applications.

Let's start with configuring NGINX first and make it possible to run it under Docker. This is going to be very easy as NGINX also has an image up on Docker Hub. We will use this image and tell NGINX to read our config file, which looks like this:

worker_processes 4;

events { worker_connections 1024; }

http {
    upstream web-app {
        server webapp1:5090;
        server webapp2:5090;
    }

    server {
      listen 80;

      location / {
        proxy_pass http://web-app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
      }
    }
}

This configuration file has some generic stuff in it but, most importantly, it has our load balancing and reverse proxy configuration. This configuration tells NGINX to accept requests on port 80 and proxy those requests to webapp1:5090 and webapp2:5090. Check out the NGINX reverse proxy guide and load balancing guide for more information about how you can customize the way you are doing the proxying and load balancing, but the above configuration is enough for this sample.
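
As one example of such customization, switching to a different balancing method is a one-line change inside the upstream block (round robin is the default; the sketch below is only illustrative):

upstream web-app {
    least_conn;             # send each request to the server with the fewest active connections
    server webapp1:5090;
    server webapp2:5090;
}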

There is also an important part in this NGINX configuration to make Kestrel happy. Kestrel has an annoying bug in RC1 which has already been fixed for RC2. To work around the issue, you need to set the "Connection: keep-alive" header, which is what we are doing with the "proxy_set_header Connection keep-alive;" declaration in our NGINX configuration.

Here is what NGINX Dockerfile looks like (reference to the file):

FROM nginx
COPY ./nginx.conf /etc/nginx/nginx.conf

You might wonder at this point what webapp1 and webapp2 (which we have indicated inside the NGINX configuration file) map to. These are the DNS references for the containers which will run our ASP.NET 5 applications, and when we link them in our Docker Compose file, the DNS mapping will happen automatically based on the container names. Finally, here is what our composition looks like inside the Docker Compose file (reference to the file):

webapp1:
  build: .
  dockerfile: docker-webapp.dockerfile
  container_name: hasample_webapp1
  ports:
    - "5091:5090"
    
webapp2:
  build: .
  dockerfile: docker-webapp.dockerfile
  container_name: hasample_webapp2
  ports:
    - "5092:5090"

nginx:
  build: .
  dockerfile: docker-nginx.dockerfile
  container_name: hasample_nginx
  ports:
    - "5000:80"
  links:
    - webapp1
    - webapp2

You can see that under the third container definition, we linked the two previously defined containers to the NGINX container. Alternatively, you may want to look at service discovery in the context of Docker instead of linking.

Now we have everything in place and all we need to do is run two docker-compose commands (under the directory where we have the Docker Compose file) to get the application up and running:

docker-compose build
docker-compose up

After these, we should see three containers running. We should also be able to hit localhost:5000 from the host machine and see that the load is being distributed to both ASP.NET 5 application containers:

[Screenshot: requests to localhost:5000 being served by both application containers]
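
If you prefer the command line to a screenshot, a quick sanity check along these lines should show the same thing (the hasample_* names come from the container_name entries in the Compose file above):

docker-compose ps                # should list hasample_webapp1, hasample_webapp2 and hasample_nginx as Up
curl -i http://localhost:5000/   # repeat a few times; responses should come from both application containers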

Pretty great! However, this is just a sample for demo purposes to show how simple it is to get an environment like this up and running locally. This probably provides no performance gains when you run all containers in one box. My next step is going to be to get HAProxy into this mix and let it do the load balancing instead.

Playing Around with Docker: Hello World, Development Environment and Your Application

I have also been looking into Docker for a while now. In this post, I am planning to cover what made me love Docker and where it shines for me.
2015-07-13 19:16
Tugberk Ugurlu


When you have an urge to blog about something, it's mostly the case that you have just learnt something :) Well, that's the case for me now. I have been looking into the Linux space for a while now and I didn't expect it to go this smoothly. It is wonderful and I must admit that I have been missing out on too much nice stuff by sticking with Windows for my development environment. However, I have one thing, one big thing that I have no regrets about at all: the .NET ecosystem. It's one of the great ecosystems to write applications on top of, and as it's now so easy to get it into non-Windows environments, the door to those environments is now wide open to me.

I have also been looking into Docker for a while now, and I was mostly trying to understand the concept, as it's quite distant to you if you have only been developing on Windows for a long time. After the concept was clear, the fun part started to take place. In this post, I am planning to cover that part and show you what made me love Docker.

Why Docker

Here is why I think Docker is useful for me (not much different from other people's reasons):

  • Repeatable, declarative environments. This can get much better with Docker Compose for your development, CI and QA (a.k.a. your pre-production) environments.
  • Read, try and tear down when you are learning a new tool like Redis, RabbitMQ, etc. Just run the docker run command and create the container. Play with the tool in that container and remove the container at the end.
  • One way of deploying stuff. AWS, Azure, whatever. Wherever you go, you will use the same script to deploy your stuff.
  • Shifting your thinking to modularize the hell out of your solution (microservices, there I said it). This can open up insane opportunities. For example, developing each part of your application with the stack that is suitable for the job. Not only will you preserve your sanity by using the right tool, but you will also have different parts inside your solution which can be developed separately by different people who have different skill sets. I strongly suggest the .NET Rocks podcast on Building Microservices with Howard Dierking to understand more about this.
  • I am not sure about this, but Docker also makes it really trivial for people to dockerize repro environments for issues.

There are possibly more, but these are the things that made me love Docker.

Hello World

I will assume that you have installed Docker and you are ready to go. In my case here, I am using Ubuntu 14.04 LTS, but I assume it should be the same on OS X as well.

As you would expect from Docker, the "Hello World" example is also declared and packaged up (a.k.a. dockerized). To get the "Hello World" example running, just run the below command:

docker run ubuntu:14.04 /bin/echo 'Hello world'

What happens when you do this is explained under the Docker Hello World docs but briefly: you get a container based on the Ubuntu 14.04 image, it runs echo 'Hello world' inside it and exits.


As mentioned, this container will not stay alive after the echo has finished its job, but it is still there to be run again. If you run the below command, you will see that the container is there:

docker ps -a


We can start the container again by running the following command, based on the container ID we retrieved from the docker ps output:

docker start --attach 6a174ac370a2


We also used the --attach switch here to attach STDOUT/STDERR and forward signals, and that's why we are able to see the hello world written in our console. Let's see a more realistic container example this time by getting a long-running container up:

docker run -d ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done"

This example is the exact same example you can find in the "A daemonized Hello world" section of the Docker Hello World doc. The interesting part here is the -d switch, which tells Docker to run the container and put it in the background. If we run docker ps now, we should see that the container is still in the running state:

[Screenshot: docker ps output showing the container still running]

We can attach to a running container's STDOUT/STDERR by running the below command based on the container ID:

docker attach ff2847155ced


You can detach from the container and leave it running with CTRL-p CTRL-q.

Also, you should have noticed that the first run command took a while to get going because it downloaded all the image layers from the registry. However, the second one was instantaneous as the Ubuntu 14.04 image was already there. So we can understand from this that Docker images are immutable and composable, which is great. You can look at the images you have on your host by running the docker images command.


Development Environment

As mentioned before, Docker makes it super easy to pull stuff in and try it out. For example, Redis is on the Docker registry and I can just run it as another container:

docker run --name my-redis -d redis:3.0.2

[Screenshots: the Redis container running, with TCP port 6379 exposed]

I can see here that TCP port 6379 is also exposed, which is the port Redis listens on. However, I need to know the IP address of this container to connect to this Redis instance from the host. We can figure out the IP address of a running container through the inspect command:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' dfaf0cf33467


Now, I can connect to this Redis instance with the redis-cli tool I have installed on my host:

redis-cli -h 172.17.0.10 -p 6379


I can play with this as long as I want and I can stop the container whenever I don't need it anymore. I can follow this process for nearly everything (e.g. Ruby, GoLang, Elasticsearch, MongoDB, RabbitMQ, you-name-your-thing, etc.). For example, to get yourself a Python development environment, you can run the following docker run command:

docker run -t -i python:2.7.10 /bin/bash

This will get you an interactive container with Python installed in it and you can do anything you want with it:

[Screenshot: an interactive Python session inside the container]

When you’re done, you can use the exit command or press Ctrl-D to finish your interactive session inside the container and effectively stop the container. The container is still within easy reach: you can start it again by obtaining its ID through docker ps -a and running the start command with the container ID.
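
In other words, something along these lines (the container ID will obviously differ on your machine):

docker ps -a                      # find the ID of the exited python container
docker start -ai <container-id>   # start it again with an interactive session attached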

You might wonder how tools like Redis, MongoDB and Elasticsearch fit into this world, as they need to persist data on disk while Docker containers by nature are created and torn down without any worry. This is a well-thought-out problem and Docker has a solution for it with Data Volumes.
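
As a tiny, hedged example of the data volume idea: the official redis image keeps its data under /data, so mounting a host directory there is enough for the data to outlive the container (the directory and container names here are just illustrations):

docker run -d --name my-persistent-redis \
    -v "$PWD/redis-data":/data \
    redis:3.0.2 redis-server --appendonly yes
# data written by Redis ends up in ./redis-data on the host and survives the container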

Your Application

This is all great and shiny, but where does our application fit into this? There are many different approaches that you can take with Docker for your application, but let me show you a straightforward and powerful approach which will hopefully give you an idea.

As an application example, I have chosen to dockerize the Octopus Deploy Library web application, which runs on top of Node.js. The way to achieve this is through the Dockerfile. It has well written documentation and here is what the Dockerfile for the Octopus Deploy Library looks like:

FROM node:0.12.7

RUN ["npm", "install", "gulp", "-g"]
RUN ["npm", "install", "bower", "-g"]
RUN ["npm", "install", "grunt-cli", "-g"]

COPY . /app
WORKDIR /app

RUN ["npm", "install"]
RUN ["bower", "--allow-root", "install"]

EXPOSE 4000

ENTRYPOINT ["gulp"]

In my opinion, self-descriptiveness is the best part of this file. We are defining here that the application image should be based on the node:0.12.7 image, which has the Node.js stuff in it. Then, we run a few npm commands to install what we need globally. Later, we copy our stuff and change the working directory. Lastly, we install the dependencies, expose TCP port 4000 and specify the entry point command.

The Octopus Deploy Library application does its magic and gets the server up when you run the default gulp task. That's why it's our entry point here.

We can now build our application image:

docker build -t octopus-library .


This kicks off the build and creates the image if everything goes well. Lastly, we can get the container up and running under our host using the same run command:

docker run -t -d -p 4040:4000 octopus-library

We use the -p option here to map the container's internal TCP port 4000 to TCP port 4040 on the host. This way, you can access the running application from the host through TCP port 4040:

[Screenshot: the Octopus Deploy Library application served through port 4040 on the host]
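
Or, from a terminal on the host, a quick check like the below should hit the containerized application through the mapped port:

curl -i http://localhost:4040/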

You can repeat the same steps by cloning my fork of Octopus Library and switching to docker branch.
