Setting up a MongoDB Replica Set with Docker and Connecting to It With a .NET Core App
Easily setting up realistic non-production environments (e.g. dev, test, QA) is critical for shortening the feedback loop. In this blog post, I want to talk about how you can achieve this when your application relies on a MongoDB replica set, by showing you how to set one up with Docker for non-production environments.
Hold on! I want to watch, not read!
I got you covered there! I have also recorded a ~5 minute video covering the content of this blog post, where I also walk you through the steps visually. If you find this option useful, let me know through the comments below and I can aim to do it more often :)
What are we trying to do here and why?
If you have an application which works against a MongoDB database, it’s very common to have a replica set in production. This approach ensures high availability of the data, especially for read scenarios. However, outside of production, applications mostly end up working against a single MongoDB instance, because setting up a replica set in isolation is a tedious process. As mentioned at the beginning of the post, we want the development and testing environments to reflect the production environment as closely as possible. The reason for that is to catch unexpected behaviour which may only occur in a production environment. This approach is valuable because it allows us to shorten the feedback loop on those exceptional cases.
Docker makes this all easy!
This is where Docker enters the picture! Docker is a containerization technology and it allows us to provision environments through a repeatable, declarative process. It also gives us a try-and-tear-down model where we can experiment and easily start again from the initial state. Docker can also help us easily set up a MongoDB replica set. Within our Docker host, we can create a Docker network which gives us isolated DNS resolution across containers. Then we can start creating the MongoDB containers. They would initially be unaware of each other. However, we can initialise the replication by connecting to one of the containers and running the replica set initialisation command. Finally, we can deploy our application container under the same Docker network.
There are a handful of advantages to setting up this with Docker and I want to specifically touch on some of them:
- It can be automated easily. This is especially crucial for test environments which are provisioned on demand.
- It’s repeatable! The declarative nature of the Dockerfile makes it possible to end up with the same environment setup even if you run the scripts months after your initial setup.
- Familiarity! Docker is widely known and used for lots of other purposes, so familiarity with the tool is high. Of course, this may depend on your development environment.
Let’s make it work!
First of all, I need to create a Docker network. I can achieve this by running the "docker network create" command and giving it a unique name.
docker network create my-mongo-cluster
The next step is to create the MongoDB Docker containers and start them. I can use the "docker run" command for this. Also, MongoDB has an official image on Docker Hub, so I can reuse that to simplify the acquisition of MongoDB. For convenience, I will name the containers with a number suffix. Each container also needs to be tied to the network we have previously created. Finally, I need to specify the name of the replica set for each container.
docker run --name mongo-node1 -d --net my-mongo-cluster mongo --replSet "rs0"
The first container is created, and I need to run the same command to create two more MongoDB containers. The only difference is the container names.
docker run --name mongo-node2 -d --net my-mongo-cluster mongo --replSet "rs0"
docker run --name mongo-node3 -d --net my-mongo-cluster mongo --replSet "rs0"
I can see that all of my MongoDB containers are in the running state by executing the "docker ps" command.
In order to form a replica set, I need to initialise the replication. I will do that by connecting to one of the containers through the “docker exec” command and starting the mongo shell client.
docker exec -it mongo-node1 mongo
As I now have a connection to the server, I can initialise the replication. This requires me to declare a config object which will include connection details of all the servers.
config = {
    "_id" : "rs0",
    "members" : [
        { "_id" : 0, "host" : "mongo-node1:27017" },
        { "_id" : 1, "host" : "mongo-node2:27017" },
        { "_id" : 2, "host" : "mongo-node3:27017" }
    ]
}
Finally, we can run the "rs.initiate()" command to complete the setup.
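Passing in the config object we have just defined, the call inside the mongo shell is simply:

rs.initiate(config)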
You will notice that the server I am connected to will shortly be elected as the primary of the replica set. By running "rs.status()", I can view the status of the other MongoDB servers within the replica set. We can see that there are two secondaries and one primary.
.NET Core Application
As a scenario, I want to run my .NET Core application which writes data to a MongoDB database and then reads it back in a loop. This application will be connecting to the MongoDB replica set which we have just created. It is a standard .NET Core console application which you can create by running the following command:
dotnet new console
The csproj file for this application looks like below.
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Bogus" Version="18.0.2" />
    <PackageReference Include="MongoDB.Driver" Version="2.4.4" />
    <PackageReference Include="Polly" Version="5.3.1" />
  </ItemGroup>

</Project>
Notice that I have two interesting dependencies there. Polly is used to retry the read calls to MongoDB based on defined policies. This bit is interesting as I would expect the MongoDB client to handle that for read calls. However, it might also be a good way of explicitly stating which calls can be retried inside your application. Bogus, on the other hand, is just there to create fake names and make the application a bit more realistic :)
Finally, this is the code to make this application work:
using System;
using System.Threading;
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Core.Clusters;

partial class Program
{
    static void Main(string[] args)
    {
        var settings = new MongoClientSettings
        {
            Servers = new[]
            {
                new MongoServerAddress("mongo-node1", 27017),
                new MongoServerAddress("mongo-node2", 27017),
                new MongoServerAddress("mongo-node3", 27017)
            },
            ConnectionMode = ConnectionMode.ReplicaSet,
            ReplicaSetName = "rs0"
        };

        var client = new MongoClient(settings);
        var database = client.GetDatabase("mydatabase");
        var collection = database.GetCollection<User>("users");

        System.Console.WriteLine("Cluster Id: {0}", client.Cluster.ClusterId);
        client.Cluster.DescriptionChanged += (object sender, ClusterDescriptionChangedEventArgs foo) =>
        {
            System.Console.WriteLine("New Cluster Id: {0}", foo.NewClusterDescription.ClusterId);
        };

        for (int i = 0; i < 100; i++)
        {
            var user = new User
            {
                Id = ObjectId.GenerateNewId(),
                Name = new Bogus.Faker().Name.FullName()
            };

            collection.InsertOne(user);
        }

        while (true)
        {
            var randomUser = collection.GetRandom();
            Console.WriteLine(randomUser.Name);

            Thread.Sleep(500);
        }
    }
}
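The User document type referenced above isn't shown in the post; a minimal version, inferred from the way it's used, would look like this:

public class User
{
    // Property names inferred from the snippet above; adjust as needed.
    public ObjectId Id { get; set; }

    public string Name { get; set; }
}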
This is not the most beautiful and optimized code ever but should demonstrate what we are trying to achieve by having a replica set. It's actually the GetRandom method on the MongoDB collection object which handles the retry:
using System;
using MongoDB.Driver;
using Polly;

public static class CollectionExtensions
{
    private static readonly Random random = new Random();

    public static T GetRandom<T>(this IMongoCollection<T> collection)
    {
        var retryPolicy = Policy
            .Handle<MongoCommandException>()
            .Or<MongoConnectionException>()
            .WaitAndRetry(2, retryAttempt =>
                TimeSpan.FromSeconds(Math.Pow(2, retryAttempt))
            );

        return retryPolicy.Execute(() => GetRandomImpl(collection));
    }

    private static T GetRandomImpl<T>(this IMongoCollection<T> collection)
    {
        // The sample above inserts 100 documents, so skipping a random number (0-98)
        // always leaves something to return.
        return collection.Find(FilterDefinition<T>.Empty)
            .Limit(-1)
            .Skip(random.Next(99))
            .First();
    }
}
I will run this through Docker as well and here is the Dockerfile for it:
FROM microsoft/dotnet:2-sdk
COPY ./mongodb-replica-set.csproj /app/
WORKDIR /app/
RUN dotnet --info
RUN dotnet restore
ADD ./ /app/
RUN dotnet publish -c DEBUG -o out
ENTRYPOINT ["dotnet", "out/mongodb-replica-set.dll"]
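The post doesn't spell out the build and run commands for the application image, but under the same Docker network they would look something like the following (the image and container names are my own picks, not from the original setup):

docker build -t mongodb-replica-set-app .
docker run -d --name mongodb-replica-set-app --net my-mongo-cluster mongodb-replica-set-app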
When it starts, we can see that it will output the result to the console:
Prove that It Works!
In order to demonstrate the effect of the replica set, I want to take down the primary node. First of all, we need to have a look at the output of the rs.status() command we previously ran in order to identify the primary node. We can see that it's node1!
Secondly, we need to get the container id for that node.
Finally, we can kill the container by running the "docker stop" command. Once the container is stopped, you will notice that the application gracefully recovers and continues reading the data.
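For reference, the commands at this step look roughly like this in my setup (mongo-node1 happened to be the primary; note that docker stop also accepts the container name, so the id lookup is optional):

docker ps --filter "name=mongo-node1" --quiet
docker stop mongo-node1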
ASP.NET Core Authentication in a Load Balanced Environment with HAProxy and Redis
Token based authentication is a fairly common way of authenticating a user for an HTTP application. ASP.NET and its frameworks have supported implementing this out of the box without much effort, with different types of authentication approaches such as cookie based authentication, bearer token authentication, etc. ASP.NET Core is no exception to this, and it got even better (which we will see in a while).
However, handling this in a load balanced environment has always required extra care, as all of the nodes should be able to read a valid authentication token even if that token has been written by another node. The old-school ASP.NET solution to this is to keep the machine key in sync across all the nodes. The machine key, for those who are not familiar with it, is used to encrypt and decrypt the authentication tokens under ASP.NET, and each machine by default has its own unique one. However, you can override this and put your own in place per application through a setting inside the Web.config file. That approach had its own problems, and with ASP.NET Core all data protection APIs have been revamped, which cleared the way for big improvements in this area such as key expiration and rolling, key encryption at rest, etc. One of those improvements is the ability to store keys in different storage systems, which is what I am going to touch on in this post.
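For those who haven't seen the old machine key approach, the classic Web.config override looks roughly like this (the key values below are placeholders, not real keys):

<system.web>
  <!-- Every node gets the same generated key values; placeholders shown here. -->
  <machineKey validationKey="[shared validation key]"
              decryptionKey="[shared decryption key]"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>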
The Problem
Imagine a case where we have an ASP.NET Core application which uses cookie based authentication and stores its user data in MongoDB, implemented using ASP.NET Core Identity and its MongoDB provider.
This setup is all fine and our application should function perfectly. However, if we put this application behind HAProxy and scale it up to two nodes, we will start seeing problems like below:
System.Security.Cryptography.CryptographicException: The key {3470d9c3-e59d-4cd8-8668-56ba709e759d} was not found in the key ring.
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.UnprotectCore(Byte[] protectedData, Boolean allowOperationsOnRevokedKeys, UnprotectStatus& status)
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.DangerousUnprotect(Byte[] protectedData, Boolean ignoreRevocationErrors, Boolean& requiresMigration, Boolean& wasRevoked)
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.Unprotect(Byte[] protectedData)
   at Microsoft.AspNetCore.Antiforgery.Internal.DefaultAntiforgeryTokenSerializer.Deserialize(String serializedToken)
Let’s look at the below diagram to understand why we are having this problem:
By default, ASP.NET Core Data Protection is wired up to store its keys on the file system. If you have your application running on multiple nodes as shown in the above diagram, each node will have its own keys to protect and unprotect sensitive information like authentication cookie data. As you can guess, this behaviour is problematic with the above structure, since one node cannot read the protected data which another node protected.
The Solution
As I mentioned before, one of the extensibility points of the ASP.NET Core Data Protection stack is the storage of the data protection keys. This storage can be a central place which all the nodes of our web application can reach. Let’s look at the below diagram to understand what we mean by this:
Here, we have Redis as our Data Protection key storage. Redis is a good choice here as it’s well suited for key-value storage, and that’s exactly what we need. With this setup, it will be possible for both nodes of our application to read protected data regardless of which node wrote it.
Wiring up Redis Data Protection Key Storage
With ASP.NET Core 1.0.0, we had to write the implementation ourselves to make ASP.NET Core store Data Protection keys on Redis, but with the 1.1.0 release the team also shipped a NuGet package which makes it really easy to wire this up: Microsoft.AspNetCore.DataProtection.Redis. This package allows us to easily swap the data protection storage destination to Redis. We can do this while we are configuring services as part of ConfigureServices:
public void ConfigureServices(IServiceCollection services)
{
    // sad but a giant hack :(
    // https://github.com/StackExchange/StackExchange.Redis/issues/410#issuecomment-220829614
    var redisHost = Configuration.GetValue<string>("Redis:Host");
    var redisPort = Configuration.GetValue<int>("Redis:Port");
    var redisIpAddress = Dns.GetHostEntryAsync(redisHost).Result.AddressList.Last();
    var redis = ConnectionMultiplexer.Connect($"{redisIpAddress}:{redisPort}");

    services.AddDataProtection().PersistKeysToRedis(redis, "DataProtection-Keys");
    services.AddOptions();

    // ...
}
I have wired it up exactly like this in my sample application in order to show you a working example. It’s an example taken from the ASP.NET Identity repository but slightly changed to make it work with the MongoDB Identity store provider.
Note here that the configuration values above are specific to my implementation and it doesn’t have to be that way. See these lines inside my Docker Compose file and these inside my Startup class to understand how they are being passed and hooked up.
The sample application can be run on Docker through Docker Compose and it will get a few things up and running:
- Two nodes of the application
- A MongoDB instance
- A Redis instance
You can see my docker-compose.yml file to understand how I hooked things together:
mongo:
  build: .
  dockerfile: mongo.dockerfile
  container_name: haproxy_redis_auth_mongodb
  ports:
    - "27017:27017"

redis:
  build: .
  dockerfile: redis.dockerfile
  container_name: haproxy_redis_auth_redis
  ports:
    - "6379:6379"

webapp1:
  build: .
  dockerfile: app.dockerfile
  container_name: haproxy_redis_auth_webapp1
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_server.urls=http://0.0.0.0:6000
    - WebApp_MongoDb__ConnectionString=mongodb://mongo:27017
    - WebApp_Redis__Host=redis
    - WebApp_Redis__Port=6379
  links:
    - mongo
    - redis

webapp2:
  build: .
  dockerfile: app.dockerfile
  container_name: haproxy_redis_auth_webapp2
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_server.urls=http://0.0.0.0:6000
    - WebApp_MongoDb__ConnectionString=mongodb://mongo:27017
    - WebApp_Redis__Host=redis
    - WebApp_Redis__Port=6379
  links:
    - mongo
    - redis

app_lb:
  build: .
  dockerfile: haproxy.dockerfile
  container_name: app_lb
  ports:
    - "5000:80"
  links:
    - webapp1
    - webapp2
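With that file in place, bringing the whole stack up should be a single command from the repository root (assuming the file is saved under the conventional docker-compose.yml name):

docker-compose up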
HAProxy is also configured to balance the load between two application nodes as you can see inside the haproxy.cfg file, which we copy under the relevant path inside our dockerfile:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 10000
    timeout server 10000

frontend balancer
    bind 0.0.0.0:80
    mode http
    default_backend app_nodes

backend app_nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request set-header Connection keep-alive
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk GET / HTTP/1.1\r\nHost:localhost
    server webapp1 webapp1:6000 check
    server webapp2 webapp2:6000 check
Those are the details of how I wired up the sample. If we now look closely at the header of the web page, you should see the server name written inside the parentheses. If you refresh enough, you will see that part alternating between the two server names:
This confirms that our load is balanced between the two application nodes. The rest of the demo is actually very boring. It should just work as you expect it to work. Go to the “Register” page and register for an account, log out and log back in. All of those interactions should just work. If we look inside the Redis instance, we should also see that the Data Protection key has been written there:
docker run -it --link haproxy_redis_auth_redis:redis --rm redis redis-cli -h redis -p 6379 LRANGE DataProtection-Keys 0 10
Conclusion and Going Further
I believe I was able to show you what you need to care about in terms of authentication when you scale out your application nodes to multiple servers. However, do not take my sample as-is and apply it to your production application :) There are a few important things that suck in my sample, like the fact that my application nodes talk to Redis in an unencrypted fashion. You may want to consider exposing Redis over a proxy which supports encryption.
The other important bit of my implementation is that all of the nodes of my application act as Data Protection key generators. Even though I haven’t seen many problems with this in practice so far, you may want to restrict key generation to only one node. You can achieve this by calling DisableAutomaticKeyGeneration like below during the configuration stage on your secondary nodes:
public void ConfigureServices(IServiceCollection services)
{
    services.AddDataProtection().DisableAutomaticKeyGeneration();
}
I would suggest determining whether a node is the primary or not through a configuration value, so that you can override it through an environment variable, for example.
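A minimal sketch of what I mean, assuming a made-up configuration key called "DataProtection:IsPrimaryNode" which you would pass in through an environment variable just like the Redis host and port:

public void ConfigureServices(IServiceCollection services)
{
    // Connecting directly for brevity; the DNS workaround from the earlier
    // snippet still applies if you hit the same StackExchange.Redis issue.
    var redisHost = Configuration.GetValue<string>("Redis:Host");
    var redisPort = Configuration.GetValue<int>("Redis:Port");
    var redis = ConnectionMultiplexer.Connect($"{redisHost}:{redisPort}");

    var dataProtection = services.AddDataProtection()
        .PersistKeysToRedis(redis, "DataProtection-Keys");

    // "DataProtection:IsPrimaryNode" is a hypothetical key; only the node
    // flagged as primary keeps automatic key generation enabled.
    if (!Configuration.GetValue<bool>("DataProtection:IsPrimaryNode"))
    {
        dataProtection.DisableAutomaticKeyGeneration();
    }

    services.AddOptions();
}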
Moving to ASP.NET Core RC2: Tooling
.NET Core Runtime RC2 was released a few days ago along with .NET Core SDK Preview 1. At the same time as the .NET Core release, ASP.NET Core RC2 was also released. Today, I started doing the transition from RC1 to RC2 and I wanted to write about how I am getting each stage done. Hopefully, it will be somewhat useful to you as well. In this post, I want to talk about the tooling aspect of the transition.
Get the dotnet CLI Ready
One of the biggest shifts from RC1 to RC2 is the tooling. Before, we had DNVM, DNX and DNU as command line tools. All of them are now gone (RIP). Instead, we have one command line tool: the dotnet CLI. First, I installed the dotnet CLI on my Ubuntu 14.04 VM by running the following script as explained here:
sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet/ trusty main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver apt-mo.trafficmanager.net --recv-keys 417A0893
sudo apt-get update
sudo apt-get install dotnet-dev-1.0.0-preview1-002702
This installed the dotnet-dev-1.0.0-preview1-002702 package on my machine and I was good to go:
You can also use apt-cache to see all available versions:
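One way of doing that is something along these lines (a generic apt query; there are other ways to do this):

apt-cache search dotnet-dev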
Just to make things clear, I deleted the ~/.dnx folder entirely to get rid of all the RC1 garbage.
Get Visual Studio Code Ready
At this stage, I had the C# extension installed on my VS Code instance on my Ubuntu VM, which was only working for DNX based projects. So, I opened up VS Code and updated my C# extension to the latest version (which is 1.0.11). After the upgrade, I opened up a project which was on RC1 and VS Code started downloading the .NET Core Debugger.
That was a good experience. I didn't dig into how it does that, but I am not sure at this point why the debugger doesn't come inside the extension itself. There is probably a reason for that, but it's not my priority to dig into right now :)
Try out the Setup
Now, I am ready to blast off with .NET Core. I used the dotnet CLI to create a sample project by typing "dotnet new --type=console" and opened up the project folder with VS Code. As soon as VS Code launched, it asked me to set the stage.
That got me a few new files under the .vscode folder.
I jumped into the debug pane, selected the correct option and hit the play button after putting a breakpoint inside the Program.cs file.
Boom! I am in business.
Now, I am moving to code changes which will involve more hair pulling I suppose.
Resources
Microsoft Build 2016 in a Nutshell
Two weeks ago, I had an amazing opportunity to be at the Microsoft Build Conference in San Francisco as an attendee, thanks to my amazing company Redgate. The experience was truly unique and the number of people I met there was huge. A bit late, but I would like to share my experience of the conference with you in this post by highlighting what happened and giving you my personal takeaways. You can also check out my tweets for the Build conference.
Announcements
There were a bunch of big and small announcements from Microsoft throughout the conference. Some of these were highlighted during the two keynotes, and other announcements were spread across the three days. I tried to capture all of them here, but it's very likely I missed some (mostly the small ones):
- Running Bash on Ubuntu on Windows has been announced. This is not something like Cygwin or anything. It basically runs Ubuntu user-mode binaries on Windows without the Linux kernel. So, the things we all love, like apt-get, work as you expect them to. This is a new developer feature included in the Windows 10 "Anniversary" update which is coming soon (as they say). You can watch the Running Bash on Ubuntu on Windows video, check the Windows Command Line Improvements session from Build 2016 or Scott Hanselman's Channel 9 interview on Linux Command Line on Windows.
- Xamarin in Visual Studio is now available at no extra cost. Xamarin will be in every edition of Visual Studio, including the widely-available Visual Studio Community Edition, which is free for individual developers, open source projects, academic research, education, and small professional teams. You can also read Mobile App Development made easy with Visual Studio and Xamarin post up on VS blog.
- There is now an iPhone simulator on Windows which comes with Xamarin. This still requires a Mac to build the project AFAIK.
- Mono Project is now part of the .NET Foundation and is now under MIT License.
- Visual Studio 2015 Update 2 RTM has been made available to the public. You can read more about this on the VS Blog.
- Visual Studio "15" Preview has been made available to the public. This is different from VS 2015 and has a very straightforward install experience. I recommend watching The Future of Visual Studio session or The Future of Visual Studio Channel 9 live interview from Build 2016, or reading the Visual Studio "15" Preview Now Available blog post if you want to learn more.
- Related to both VS 2015 Update 2 and VS "15" Preview, see the What's New for C# and VB in Visual Studio post on the .NET Blog. You will mostly see features that ReSharper has had for a long time now.
- Microsoft announced its Bot Framework.
- The HoloLens SDK and emulator are now live.
- Cognitive Services has been announced. This service exposes several machine learning algorithms as a service and allows you to hook them into your apps through several HTTP based APIs, like the Computer Vision API. Some portions of this service were previously known under the codename Project Oxford. Try out https://www.captionbot.ai/ which is built on top of these APIs.
- Azure Service Fabric has stepped out of preview and is now GA. Microsoft gave a lot of attention to this service throughout the conference, including the keynote (starting from 0:52:42). You can also check out the Azure Service Fabric for Developers and Building MicroServices with Service Fabric sessions from Build 2016 for more in-depth info.
- Virtual Machine Scale Sets is now GA.
- Azure released a new service called Azure Functions. Azure Functions introduces an event driven, compute-on-demand experience that builds on Azure’s market leading PaaS platform. This is Azure's offering which is very similar to AWS Lambda. You can read Introducing Azure Functions blog post and Introducing Azure Functions session from Build 2016 for more info.
- Several announcements have been made for DocumentDB, Microsoft’s hosted NoSQL offering on Azure:
- Protocol support for MongoDB: applications can now easily and transparently communicate with DocumentDB using existing, community supported Apache MongoDB APIs and drivers. The DocumentDB protocol support for MongoDB is now in preview. This is a very strategic move to make up for the lack of a standalone local installation option, which is the most requested feature for DocumentDB.
- Global databases: DocumentDB global databases are now available in preview, visit the sign-up page for more information.
- Updated pricing and scale options.
- Azure IoT Starter Kits have been released. There was a lot of other IoT news as well; you can see the official post on the Azure Blog for the details.
- Power BI Embedded has been made available as a preview service.
- Announcing the publication of Parse Server with Azure Managed Services
- Announcing the .NET Framework 4.6.2 Preview
- Windows 10 Anniversary SDK is bringing exciting opportunities to developers
Sessions
Here is the list of sessions I have attended:
- First Day Keynote
- The Future of Visual Studio: It was a great session to watch even if there were demo failures, which were understandable considering the state of the product. Debugging on Ubuntu through VS "15" Preview was my favorite part of the session.
- What's New in TypeScript?: It's always interesting and useful to watch Anders talk about languages. In this session, the part where he talks about non-nullable type support in TypeScript was really beneficial.
- Building Resilient Services: Learning Lessons from Azure with Mark Russinovich: This is like a tradition for Build. Mark Russinovich talked about architectural and design decisions on the Azure platform, mostly driven by real use cases.
- Second Day Keynote
- .NET Overview: Very valuable session to understand the future of .NET.
- Introducing ASP.NET Core 1.0: Very basic intro on ASP.NET Core 1.0. It is useful to watch if you have no idea what is coming up on ASP.NET Core 1.0. Check the docs, too.
- Enhancing Your Application with Machine Learning Through APIs
- Visual Studio 2015 Extensibility: Mads Kristensen has built a VS extension, properly open sourced it and published it on the stage: https://github.com/madskristensen/FileDiffer
- Azure Data Lake and Azure Data Warehouse: Applying Modern Practices to Your App: This was a really vague, hard-to-follow session but I plan to watch it again. I find the messaging on Data Lake a bit confusing.
- Deploying ASP.NET Core Applications: Enjoyed this a lot, Dan laid out several cases in terms of deployment of ASP.NET Core applications.
As much as I wanted to attend some other sessions, I missed a few of them, mostly due to scheduling clashes. Luckily, recordings of all Build 2016 sessions are available on Channel 9. Here is my list of sessions to catch up on:
- The Future of C#: This was really entertaining and informative to watch and see what the future of C# might bring.
- Building Apps in Azure Using Windows and Linux VMs: Really good overall picture on the current state of Azure IaaS
- Cross-Platform Mobile with Xamarin
- Delivering Applications at Scale with DocumentDB, Azure's NoSQL Document Database
- Microsoft Cognitive Services: Build smarter and more engaging experiences
- Windows 10 IoT Core: From Maker to Market
- Setting the Stage: The Application Platform in Windows Server 2016
- Introducing Azure Functions
- ASP.NET Core Deep Dive into MVC
- Advanced Analytics with R and SQL
- Using the Right Networking API for Your UWP App
- SQL Database Technologies for Developers
- Building Applications Using the Azure Container Service
There were also many good Channel 9 Live interviews. You can find them here. Here is a personal list of a few which are worth listening to:
- Anders Live
- Jeffrey Snover on Azure Stack, Powershell, and Nano Server
- The Future of .NET Languages
- Linux Command Line on Windows
- The Future of Visual Studio
- Moving Forward with ASP.NET
- Docker Tooling for Visual Studio and ASP.NET Core
- Xamarin Live
- Mark Russinovich Live
- Data Science on Azure
- International Space Station Project
Personal Takeaways
All in all, it was a great conference and, as stated, I am still catching up on the stuff that I missed. Throughout the conference, I picked up a few key points and I want to end the post with those:
- I have seen more from Microsoft to make developers' lives easier and more productive by enabling new tools (Bash on Ubuntu on Windows), supporting multiple platforms (Service Fabric running on every environment including AWS, on-premises and Azure Stack, plus a preview of Service Fabric on Linux), open sourcing more (some parts of Xamarin have gone open source) and making existing paid tools available for free (Xamarin is now free).
- Microsoft is more focused on pulling its existing services together and providing a cohesive ecosystem for developers. Service Fabric, Cognitive Services and Data Lake are a few examples of this.
- .NET Core and CoreCLR are approaching finalization for v1. After RC2, I don't expect many more features to be added or concepts to change.
- I think this is the first time I have seen the client apps story stabilize for Microsoft. Universal Windows Platform (UWP) was the focus in this area this year, just as it was the previous year.
- I am absolutely happy to see Microsoft abandoning Windows Phone day by day. There were no sessions directly about it during the conference.
- There were more steps towards software that helps manage people's lives in a better way. The Skype Bot Framework was one of these steps.
- Microsoft (mostly the Azure group) is investing heavily in IoT solutions. Azure Functions and the new updates to the Azure IoT Suite are just a few signs of this.
- Azure Resource Manager (ARM) and ARM templates are getting a lot of love from Microsoft, and that's the way they are pushing forward. They even build new Azure services on top of it.
Having a Look at dotnet CLI Tool and .NET Native Compilation in Linux
I have been following ASP.NET 5 development from the very start and it has been an amazing experience so far. This new platform has seen so many changes in both libraries and concepts throughout, but the biggest of all is about to come. The command line tools that ASP.NET 5 brought us, like dnx and dnu, will vanish soon. However, this doesn’t mean that we won’t have a command line first experience. The concepts behind these tools will be carried over by a new command line tool: the dotnet CLI.
Note that dotnet CLI is not even a beta yet. It’s very natural that some of the stuff that I show below may change or even be removed. So, be cautious.
Scott Hanselman gave an amazing introduction to this tool in his blog post. As indicated in that post, the new dotnet CLI tool will give us an experience very similar to other platforms like Go, Ruby and Python. This is very important because, again, it removes another entry barrier for newcomers.
You can think of this new CLI tool as a combination of the following three, in terms of concepts:
- csc.exe
- msbuild.exe
- nuget.exe
Of course, this is an understatement, but it will help you get a grasp of what the tool can do. One other important aspect of the tool is being able to bootstrap your code and execute it. Here is how:
In order to install the dotnet CLI tool on my Ubuntu machine, I just followed the steps in the Getting Started guide for Ubuntu.
Step one is to create a project structure. My project has two files under the "hello-dotnet" folder. Program.cs:
using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
project.json:
{ "version": "1.0.0-*", "compilationOptions": { "emitEntryPoint": true }, "dependencies": { "Microsoft.NETCore.Runtime": "1.0.1-beta-*", "System.IO": "4.0.11-beta-*", "System.Console": "4.0.0-beta-*", "System.Runtime": "4.0.21-beta-*" }, "frameworks": { "dnxcore50": { } } }
These are the bare essentials I need to get something output to my console window. One important piece here is the emitEntryPoint bit inside the project.json file, which indicates that the module will have an entry point, the static Main method by default.
The second step here is to restore the defined dependencies. This can be done through the "dotnet restore" command:
Finally, we can now execute the code that we have written and see that we can actually output some text to console. At the same path, just run "dotnet run" command to do that:
Very straightforward experience! Let’s just try to compile the code through the "dotnet compile" command:
Notice the "hello-dotnet" file there. You can think of this file as dnx which can just run your app. It’s basically the bootstrapper just for your application.
So, we understand that we can just run this thing:
Very nice! However, that’s not all! This is still a .NET application which requires a few things to be in place to be executed. What we can also do here is to compile native, standalone executables (just like you can do with Go).
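The command in question (keeping in mind the CLI is pre-beta and switch names may still change) is along these lines:

dotnet compile --native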
Do you see the "--native" switch? That will allow you to compile a native executable binary which will be specific to the architecture that you are compiling on (in my case, it’s Ubuntu 14.04):
"hello-dotnet" file here can be executed same as the previous one but this time, it’s all machine code and everything is embedded (yes, even the .NET runtime). So, it’s very usual that you will see a significant increase in the size:
This is a promising start and it’s amazing to see that we have a unified tool to rule them all (famous last words). The name of the tool is also great; it makes it straightforward to understand based on your experience with other platforms. Seeing this type of command line first architecture adopted outside of ASP.NET is also great and will bring consistency throughout the ecosystem. I will be watching this space as I am sure there will be more to come :)