
ASP.NET Core Authentication in a Load Balanced Environment with HAProxy and Redis

Token based authentication is a fairly common way of authenticating a user for an HTTP application. However, handling this in a load balanced environment has always required extra care. In this post, I will show you how this is handled in ASP.NET Core by demonstrating it with HAProxy and Redis through the help of Docker.
2016-11-28 23:31
Tugberk Ugurlu


Token based authentication is a fairly common way of authenticating a user for an HTTP application. ASP.NET and its frameworks have supported this out of the box without much effort, with different types of authentication approaches such as cookie based authentication, bearer token authentication, etc. ASP.NET Core is no exception to this, and it got even better (as we will see in a while).

However, handling this in a load balanced environment has always required extra care, as every node must be able to read a valid authentication token even if that token was written by another node. The old-school ASP.NET solution to this is to keep the machine key in sync across all the nodes. The machine key, for those who are not familiar with it, is used to encrypt and decrypt authentication tokens under ASP.NET, and each machine has its own unique one by default. However, you can override this and put your own in place per application through a setting inside the Web.config file. This approach had its own problems, and with ASP.NET Core all data protection APIs have been revamped, which cleared room for big improvements in this area such as key expiration and rolling, key encryption at rest, etc. One of those improvements is the ability to store keys in different storage systems, which is what I am going to touch on in this post.

The Problem

Imagine a case where we have an ASP.NET Core application which uses cookie based authentication and stores its user data in MongoDB, implemented using ASP.NET Core Identity and its MongoDB provider.

1-ok-wo-lb
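For context, the Identity wiring in such an application looks roughly like the following. This is a hedged sketch only: ApplicationUser, ApplicationRole, MongoUserStore and MongoRoleStore are illustrative placeholders for whatever types and registration helpers the MongoDB provider actually exposes.

using Microsoft.AspNetCore.Identity;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // AddIdentity sets up cookie based authentication for the application.
        services.AddIdentity<ApplicationUser, ApplicationRole>()
                .AddDefaultTokenProviders();

        // Hypothetical: the MongoDB provider would register its own
        // IUserStore/IRoleStore implementations, typically via an extension method.
        services.AddScoped<IUserStore<ApplicationUser>, MongoUserStore>();
        services.AddScoped<IRoleStore<ApplicationRole>, MongoRoleStore>();

        services.AddMvc();
    }
}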

This setup is all fine and our application should function perfectly. However, if we put this application behind HAProxy and scale it out to two nodes, we will start seeing problems like the one below:

System.Security.Cryptography.CryptographicException: The key {3470d9c3-e59d-4cd8-8668-56ba709e759d} was not found in the key ring.
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.UnprotectCore(Byte[] protectedData, Boolean allowOperationsOnRevokedKeys, UnprotectStatus& status)
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.DangerousUnprotect(Byte[] protectedData, Boolean ignoreRevocationErrors, Boolean& requiresMigration, Boolean& wasRevoked)
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.Unprotect(Byte[] protectedData)
   at Microsoft.AspNetCore.Antiforgery.Internal.DefaultAntiforgeryTokenSerializer.Deserialize(String serializedToken)

Let’s look at the below diagram to understand why we are having this problem:

2-not-ok-w-lb

By default, ASP.NET Core Data Protection is wired up to store its keys on the file system. If you have your application running on multiple nodes as shown in the above diagram, each node will have its own keys to protect and unprotect sensitive information like authentication cookie data. As you can guess, this behaviour is problematic with the above structure, since one node cannot read protected data that another node has written.
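As an aside, the Data Protection stack can already be pointed at a shared location with its built-in file-system persistence. A minimal sketch, assuming a made-up UNC path that every node can reach:

using System.IO;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Persist keys to a shared network folder instead of the per-node default
    // location (the UNC path below is just an example).
    services.AddDataProtection()
            .PersistKeysToFileSystem(new DirectoryInfo(@"\\file-server\dataprotection-keys"));
}

A shared file share is one option, but the rest of this post uses Redis instead, which avoids the file share dependency altogether.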

The Solution

As I mentioned before, one of the extensibility points of the ASP.NET Core Data Protection stack is the storage of the data protection keys. This can be a central location that all the nodes of our web application can reach. Let’s look at the below diagram to understand what we mean by this:

4-ok-w-lb

Here, we have Redis as our Data Protection key storage. Redis is a good choice here as it’s well suited to key-value storage, and that’s what we need. With this setup, it will be possible for both nodes of our application to read protected data regardless of which node has written it.

Wiring up Redis Data Protection Key Storage

With ASP.NET Core 1.0.0, we had to write the implementation ourselves to make ASP.NET Core store Data Protection keys on Redis, but alongside the 1.1.0 release the team shipped a NuGet package which makes it really easy to wire this up: Microsoft.AspNetCore.DataProtection.Redis. This package allows us to swap the data protection storage destination to Redis. We can do this while configuring services as part of ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    // sad but a giant hack :(
    // https://github.com/StackExchange/StackExchange.Redis/issues/410#issuecomment-220829614
    var redisHost = Configuration.GetValue<string>("Redis:Host");
    var redisPort = Configuration.GetValue<int>("Redis:Port");
    var redisIpAddress = Dns.GetHostEntryAsync(redisHost).Result.AddressList.Last();
    var redis = ConnectionMultiplexer.Connect($"{redisIpAddress}:{redisPort}");

    // Store Data Protection keys in Redis under the "DataProtection-Keys" list
    // so that every node reads and writes the same key ring.
    services.AddDataProtection().PersistKeysToRedis(redis, "DataProtection-Keys");
    services.AddOptions();

    // ...
}

I have wired it up exactly like this in my sample application in order to show you a working example. It’s an example taken from the ASP.NET Identity repository, slightly changed to make it work with the MongoDB Identity store provider.

Note that the configuration values above are specific to my implementation and it doesn’t have to be that way. See these lines inside my Docker Compose file and these inside my Startup class to understand how they are passed in and hooked up.
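For reference, this is roughly how environment variables with a WebApp_ prefix (as used in the Docker Compose file below) end up as configuration values like Redis:Host. This is a hedged sketch of a Startup constructor; the appsettings.json file is illustrative:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public Startup(IHostingEnvironment env)
{
    // "WebApp_Redis__Host=redis" becomes the "Redis:Host" configuration value;
    // the double underscore maps to the ':' section separator.
    Configuration = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true)
        .AddEnvironmentVariables(prefix: "WebApp_")
        .Build();
}

public IConfigurationRoot Configuration { get; }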

The sample application can be run on Docker through Docker Compose and it will get a few things up and running:

  • Two nodes of the application
  • A MongoDB instance
  • A Redis instance

Image

You can see my docker-compose.yml file to understand how I hooked things together:

mongo:
    build: .
    dockerfile: mongo.dockerfile
    container_name: haproxy_redis_auth_mongodb
    ports:
      - "27017:27017"

redis:
    build: .
    dockerfile: redis.dockerfile
    container_name: haproxy_redis_auth_redis
    ports:
      - "6379:6379"

webapp1:
    build: .
    dockerfile: app.dockerfile
    container_name: haproxy_redis_auth_webapp1
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_server.urls=http://0.0.0.0:6000
      - WebApp_MongoDb__ConnectionString=mongodb://mongo:27017
      - WebApp_Redis__Host=redis
      - WebApp_Redis__Port=6379
    links:
      - mongo
      - redis

webapp2:
    build: .
    dockerfile: app.dockerfile
    container_name: haproxy_redis_auth_webapp2
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_server.urls=http://0.0.0.0:6000
      - WebApp_MongoDb__ConnectionString=mongodb://mongo:27017
      - WebApp_Redis__Host=redis
      - WebApp_Redis__Port=6379
    links:
      - mongo
      - redis

app_lb:
    build: .
    dockerfile: haproxy.dockerfile
    container_name: app_lb
    ports:
      - "5000:80"
    links:
      - webapp1
      - webapp2

HAProxy is also configured to balance the load between the two application nodes, as you can see inside the haproxy.cfg file, which we copy to the relevant path inside our dockerfile:

global
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice

defaults
  log global
  mode http
  option httplog
  option dontlognull
  timeout connect 5000
  timeout client 10000
  timeout server 10000

frontend balancer
  bind 0.0.0.0:80
  mode http
  default_backend app_nodes

backend app_nodes
  mode http
  balance roundrobin
  option forwardfor
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header Connection keep-alive
  http-request add-header X-Forwarded-Proto https if { ssl_fc }
  option httpchk GET / HTTP/1.1\r\nHost:localhost
  server webapp1 webapp1:6000 check
  server webapp2 webapp2:6000 check
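Since HAProxy injects the X-Forwarded-For and X-Forwarded-Proto headers, the application itself may also want to honor them so that it sees the original client IP and scheme. This is not part of the sample, but a minimal sketch with the stock forwarded-headers middleware would look like this:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.HttpOverrides;

public void Configure(IApplicationBuilder app)
{
    // Respect the X-Forwarded-For / X-Forwarded-Proto headers set by HAProxy
    // so the app sees the original client IP address and scheme.
    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
    });

    // ... rest of the pipeline (authentication, MVC, etc.)
}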

Those are the details of how I wired up the sample. If you now look closely at the header of the web page, you should see the server name written inside the parentheses. If you refresh enough, you will see that part alternating between the two server names:

Image

Image

This confirms that our load is balanced between the two application nodes. The rest of the demo is actually very boring. It should just work as you expect it to work. Go to the “Register” page and register for an account, log out and log back in. All of those interactions should just work. If we look inside the Redis instance, we should also see that the Data Protection key has been written there:

docker run -it --link haproxy_redis_auth_redis:redis --rm redis redis-cli -h redis -p 6379
LRANGE DataProtection-Keys 0 10

Image

Conclusion and Going Further

I believe I was able to show you what you need to care about in terms of authentication when you scale out your application to multiple nodes. However, do not take my sample as-is and apply it to your production application :) There are a few important things that suck in my sample, like the fact that my application nodes talk to Redis in an unencrypted fashion. You may want to consider exposing Redis over a proxy which supports encryption.
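For example, if that proxy (or a managed Redis offering) terminates TLS, StackExchange.Redis can be told to use an encrypted connection. The endpoint and password below are placeholders:

using StackExchange.Redis;

// "ssl=true" makes StackExchange.Redis negotiate a TLS connection to the endpoint.
var redis = ConnectionMultiplexer.Connect(
    "my-redis-host:6380,ssl=true,password=<secret>,abortConnect=false");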

The other important bit of my implementation is that all of the nodes of my application act as Data Protection key generators. Even though I haven’t seen many problems with this in practice so far, you may want to restrict key generation to a single node. You can achieve this by calling DisableAutomaticKeyGeneration as shown below during the configuration stage on your secondary nodes:

public void ConfigureServices(IServiceCollection services)
{
     services.AddDataProtection().DisableAutomaticKeyGeneration();
}

I would suggest determining whether a node is the primary one through a configuration value, so that you can override it through an environment variable, for example.
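A minimal sketch of what that could look like, assuming a made-up DataProtection:IsKeyGenerator configuration key:

public void ConfigureServices(IServiceCollection services)
{
    // redis is the ConnectionMultiplexer created earlier in this method.
    var dataProtection = services.AddDataProtection()
                                 .PersistKeysToRedis(redis, "DataProtection-Keys");

    // "DataProtection:IsKeyGenerator" is a hypothetical configuration value that
    // would be set (e.g. through an environment variable) on the primary node only.
    if (!Configuration.GetValue<bool>("DataProtection:IsKeyGenerator"))
    {
        dataProtection.DisableAutomaticKeyGeneration();
    }
}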

Long-Running Asynchronous Operations, Displaying Their Events and Progress on Clients

I want to share a few thoughts that I have been keeping to myself on showing progress for long-running asynchronous operations on a system where individual events can be sent during ongoing operations.
2016-07-30 16:33
Tugberk Ugurlu


With today's modern applications, we end up with lots of asynchronous operations happening and we somehow have to let the user know what is happening. This, of course, depends on the requirements and business needs.

Let's look at the Amazon case. When you buy something on Amazon, Amazon tells you that your order has been taken. However, it doesn't really mean that you have actually purchased it, as the process of taking the payment has not been kicked off yet. The payment process probably goes through a few stages and, at the end, it has a few possible outcomes like success, failure, etc. Amazon only gives the user the chance to learn the outcome of this process, which is done through e-mail. This is pretty straightforward to implement (famous last words?) as it doesn't require things like real-time process notifications.

On the other hand, creating a VM on Microsoft Azure has different needs. When you kick off the VM creation operation through an Azure client like the Azure Portal, it can give you real-time notifications on the operation status to show you what stage the operation is in (e.g. provisioning, starting up, etc.).

In this post, I will look at the latter option and try to make a few points to show that the implementation is a bit tricky. However, we will see a potential solution at the end, which will hopefully be helpful to you as well.

Problematic Flow

Let's look at a problematic flow:

  • User starts an operation by sending a POST/PUT/PATCH request to an HTTP endpoint.
  • Server sends a 202 (Accepted) response and includes some sort of operation identifier in the response body.
  • Client subscribes to a notification channel based on this operation identifier.
  • Client starts receiving events on that notification channel and decides how to display them in the user interface (UI).
  • There are special event types which give an indication that the operation has ended (e.g. completed, failed, abandoned, etc.).
    • The client needs to know about these event types and handle them specially.
    • Whenever the client receives an event which corresponds to an operation-end event, it should unsubscribe from the notification channel for that specific operation.

We have a few problems with the above flow:

  1. Between the start of the operation and the client subscribing to a notification channel, events may already have happened, and the client is going to miss those.
  2. If the user disconnects from the notification channel for a while (for various reasons), the client will miss the events that happen during that disconnect period.
  3. Similar to the above situation, the user might shut the client down entirely and open it up again at some point later to see the progress. In that case, we lose the events that happened before subscribing to the notification channel in the new session.

Possible Solution

The solution to all of the above problems boils down to persisting the events and opening up an HTTP endpoint to pull all the past events per operation. Let's assume that we are performing payment transactions and want to give transparent progress on this process. Our events endpoint would look similar to the below:

GET /payments/eefcb363/events

[
     {
          "id": "c12c432c",
          "type": "ProcessingStarted",
          "message": "The payment 'eefcb363' has been started for processing by worker 'h8723h7d'.",
          "happenedAt": "2016-07-30T11:05:26.222Z"
          "details": {
               "paymentId": "eefcb363",
               "workerId": "h8723h7d"
          }
     },

     {
          "id": "6bbb1d50",
          "type": "FraudCheckStarted",
          "message": "The given credit card details are being checked against fraud for payment 'eefcb363' by worker 'h8723h7d'.",
          "happenedAt": "2016-07-30T11:05:28.779Z"
          "details": {
               "paymentId": "eefcb363",
               "workerId": "h8723h7d"
          }
     },

     {
          "id": "f9e09a83",
          "type": "ProgressUpdated",
          "message": "40% of the payment 'eefcb363' has been processed by worker 'h8723h7d'.",
          "happenedAt": "2016-07-30T11:05:29.892Z"
          "details": {
               "paymentId": "eefcb363",
               "workerId": "h8723h7d",
               "percentage": 40
          }
     }
]

This endpoint allows the client to pull the events that have already happened, and the client can merge these with the events that it receives through the notification channel, where the events are pushed in the same format.
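To make the shape of such an endpoint concrete, here is a minimal ASP.NET Core-style sketch; IEventStore and GetEventsAsync are hypothetical names for whatever persistence abstraction you use:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Hypothetical persistence abstraction; the stored events mirror the JSON above.
public interface IEventStore
{
    Task<IReadOnlyList<object>> GetEventsAsync(string operationId);
}

[Route("payments/{paymentId}/events")]
public class PaymentEventsController : Controller
{
    private readonly IEventStore _eventStore;

    public PaymentEventsController(IEventStore eventStore)
    {
        _eventStore = eventStore;
    }

    [HttpGet]
    public async Task<IActionResult> Get(string paymentId)
    {
        // Return every event persisted so far for this operation, oldest first,
        // in the same shape as the events pushed over the notification channel.
        var events = await _eventStore.GetEventsAsync(paymentId);
        return Ok(events);
    }
}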

With this approach, suggested flow for the client is to follow the below steps in order to get 100% correctness on the progress and events:

  • User starts an operation by sending a POST/PUT/PATCH request to an HTTP endpoint.
  • Server sends a 202 (Accepted) response and includes some sort of operation identifier (which is 'eefcb363' in our case) in the response body.
  • Client subscribes to a notification channel based on that operation identifier.
  • Client starts receiving events on that channel and puts them on a buffer list.
  • Client then sends a GET request based on the operation id to get all the events which have happened so far and puts them into the buffer list (only the ones that are not already in there); a minimal merge sketch follows this list.
  • Client can now start displaying the events from the buffer to the user (optionally in chronological order) and keeps receiving events through the notification channel; updates the UI simultaneously.
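Here is a minimal sketch of the buffering and merging described above; OperationEvent and EventBuffer are made-up client-side types that mirror the JSON payload:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical client-side event shape matching the JSON payload above.
public class OperationEvent
{
    public string Id { get; set; }
    public string Type { get; set; }
    public string Message { get; set; }
    public DateTimeOffset HappenedAt { get; set; }
}

public class EventBuffer
{
    private readonly Dictionary<string, OperationEvent> _events =
        new Dictionary<string, OperationEvent>();

    // Works for both pushed (notification channel) and pulled (GET endpoint)
    // events; keying on the event id makes the merge idempotent.
    public void Add(OperationEvent evt)
    {
        if (!_events.ContainsKey(evt.Id))
        {
            _events.Add(evt.Id, evt);
        }
    }

    // Chronological view for display purposes.
    public IReadOnlyList<OperationEvent> InChronologicalOrder() =>
        _events.Values.OrderBy(e => e.HappenedAt).ToList();
}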

Obviously, at any display stage, the client needs to remember that there are special event types which give an indication that the operation has ended. The client needs to know about these types and handle them specially (e.g. show the completed view, etc.). Besides this, whenever the client receives an event which corresponds to an operation-end event, it should unsubscribe from the notification channel for that specific operation.

However, the situation can be a bit easier if you only care about percentage progress. If that's the case, you may still want to persist all events, but the client only cares about the latest one. This makes your life easier, and if you later decide that you want to be more transparent about showing the progress, you can, as you already have all the data for it. It is just a matter of exposing it with the above approach.

Conclusion

In any case, I would suggest you persist operation events (you can even go further with event sourcing and make this a natural part of the process). However, your use case may not require extensive and transparent progress reporting on the client. If that's the case, it will certainly make your implementation a lot simpler, and you can still change your mind later and go with a different progress reporting approach since you have already been persisting all the events.

Finally, please share your thoughts on this if you see a better way of handling this.

Profiling ASP.NET 5 Applications with ANTS Performance Profiler

ANTS Performance Profiler from Redgate supports ASP.NET 5 applications running on DNX and it allows you to profile your ASP.NET 5 applications to spot performance problems in a really easy and unobtrusive way. In this blog post, I will show you how it can help you with a sample.
2015-10-20 19:16
Tugberk Ugurlu


One of the biggest challenges with ASP.NET 5 is the fact that most of the existing tools and libraries do not support the ecosystem yet, as it’s a complete revamp of the runtime and of some of the concepts we have been used to in both development and production. However, some tools are catching up surprisingly fast, and one of them is ANTS Performance Profiler from Redgate, which allows you to profile your ASP.NET 5 applications to spot performance problems in a really easy and unobtrusive way.

ANTS Performance Profiler is a paid tool but it has a 14-day free trial option. Also, if you are a Microsoft MVP, you can get a free license for this tool. If this sounds like you, visit Redgate community page and claim your free license by following the instructions there.

When you open up ANTS .NET Performance Profiler, you should immediately notice the ASP.NET 5 section:

image

From there, it’s all pretty straightforward. Just select the project.json file path for your application and the DNX runtime that your application should run on. Optionally, you can configure other profiling options like collecting additional performance counters. When you are ready, just hit the "Start profiling" button and go!

2015-10-20_19-31-35

One feature that I am in love with here is the incoming HTTP requests view. The tool knows about the requests received during the profiling session and you can see where they started in the call tree:

image

You are able to see the hit count and the average time for requests to complete. From here, I can jump straight to the HTTP request that I am interested in:

image

Notice that it starts with ErrorHandlerMiddleware in the call tree. The reason is that it’s the first middleware in my middleware chain, and I configured ANTS Performance Profiler to only show me the methods with source, as you can see at the top of the screenshot above. So, it filtered the call stack a little for me, and I can look at the source of the actual methods if I want to:

image

Profiling Bottlenecks

ANTS Performance Profiler makes it extremely easy to spot bottlenecks in your applications. I will show you a sample based on a personal experience I had before, but I will fake the problem here a little :) The narrative is quite similar to what I went through, except that I struggled a lot more than this as I didn’t use a profiling tool to understand the problem.

In my simple sample, I have an object which I know will take a bit of time to construct because of its nature, and I am fine with that. The good news is that this object is thread safe and I can share it across multiple requests. However, I am seeing big CPU spikes when I run my application. To spot the problem, I ran a profiling session with ANTS Performance Profiler. Right after stopping the profiling session, I landed on the problem. So easy, and that makes me perfectly happy!

image

I knew that the constructor could be the problem, but I really didn’t expect it to cost this much. I know that it takes around ~2 seconds to construct on a well-functioning CPU, but when I look at the actual time spent there, it’s around ~32 seconds:

image

So, that’s not normal at all, and as soon as I saw that number, I turned my eyes to the hit count. There, I saw that it had been called 9 times! That’s way more than expected as I only wanted a single object for the whole application. I am only constructing this object through my IoC container, which can only mean that my IoC container has been configured wrong. When I looked at the configuration of my services, I saw the actual problem:

image

This should have been registered as a singleton but, as you can see, it is scoped per request, which causes it to be constructed for each request. It’s not that I wouldn’t be able to spot this without the profiling tool, but the tool makes it so much easier and faster. I actually wanted to try this as a sample because I had first-hand experience with this problem. Believe me! Spotting the exact problem was so much harder through debugging!

OK, if we go back to our issue again: changing the dependency registration to make it a singleton solves the problem nicely:

image
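Since the change itself is hidden in the screenshots, here is a hedged sketch of what it boils down to; IExpensiveService and ExpensiveService are made-up names for the slow-to-construct dependency:

// Before: a new instance is constructed for every request, which is what
// caused the repeated ~2 second constructor cost.
services.AddScoped<IExpensiveService, ExpensiveService>();

// After: a single, shared instance is constructed once for the whole application,
// which is safe here because the object is thread safe.
services.AddSingleton<IExpensiveService, ExpensiveService>();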

Check out the ANTS Performance Profiler documentation for more information about this tool. Another great resource on .NET performance profiling is the Practical Performance Profiling book, whose PDF version you can download for free.

Happy profiling .NET people :)

Software Applications Should Work Like Restaurants

This is a brain dump blog post, which I usually don't do, but I needed to get this off my chest. Restaurants and software applications have some common characteristics in terms of how they need to work, and this post highlights some of them.
2015-02-14 23:55
Tugberk Ugurlu


I went to a restaurant today and one particular thing struck me there, which made me really interested in writing this brain dump blog post: restaurants and software applications have a lot in common in terms of how they (should) function. One common characteristic I already knew about comes from the very clever and kind Steve Sanderson in his talk on asynchronous web applications, and I won’t go into details on that. I would like to focus on the other common functional characteristic I noticed today, but before that, here is a brief background of the story :)

After I walked through the door, the first thing I did was order beers and some onion rings. I then went to my table with the beers and, less than two minutes later, the onion rings were in front of me. All looked perfect. These were well-cooked, yummy onion rings.

2015-02-14 18.53.11

After a few sips and finishing the onion rings, I ordered my main course, which was a medium-well cooked, 20oz steak. It took about 10 to 15 minutes for that to arrive. At the end of the day, I was happy. It was all OK and I didn’t mind waiting longer for the steak than for the onion rings because it made sense.

Why am I explaining all this? Because this is how most software applications should function, especially HTTP based applications. The typical software application has to deal with data which is represented through the domain of the application. One way or another, users interact with this data. The important fact here is that not all of this data is the same, just like the onion rings and the steak were not the same for the restaurant. So, you shouldn’t assume that all of the data should be served in the same manner through your software application.

Onion Rings Case

Let’s go back to my restaurant example. It was obvious to me that the restaurant fried the rings before they were ordered. They were doing this because they know the general order rate for the onion rings. So, they were making a judgment call and frying the rings ahead of time. This raises the risk of serving the onion rings to each customer at a different heat and freshness level. However, this doesn’t matter that much because:

  1. The restaurant has an expected number in mind and they were not going crazy about it (like frying millions of onion rings). So, they were keeping a balance.
  2. The first priority was to serve a good product; the second was to serve the rings fast while spending the least amount of staff resources. Both were possible to achieve at the same time.

The same concept they applied there should be applied in your applications, and I bet most of us are doing this already. In some cases, users just don’t care about the accuracy and freshness of the application data. Take the Foursquare example. Do you care if you don’t see a newly-created coffee shop in your search results when you try to find the nearest coffee shops? You mostly don’t, as the result probably gives you other alternatives which definitely help you at the time. Would you care if the search result was returned after 30 seconds of waiting? I bet you would most certainly be pissed about it.

Steak Case

If the onion ring case is such a success, why didn’t they apply the same concept to the steak and serve it as fast as the rings? The answer is very simple: serving the steak fast doesn’t make the customer happy if the steak is not cooked the way the customer wants. The customer wants to tune the doneness level of the steak, and as long as they get it, it won’t matter that the serving time is longer than for the onion rings. A similar case applies to software applications, too. For instance, I would like to see my orders on Amazon accurately all the time. Making them inaccurate just to serve the data fast would make me unhappy, because I don’t care how fast you are as long as the wait stays tolerable.

Conclusion

Let’s not pretend that every piece of data in our software applications is the same. Tune the requirements well, consider the trade-offs and come up with a solid, functional plan to serve your data instead of blindly pushing it all out through the same door. If you are interested in this, make sure to read up on Eventual Consistency. Also, looking into Domain Driven Design is another thing I would recommend.

Finally, the idea of this post is not to point to something new or to call out what we generally do wrong; it is about explaining the concept with a unique example.

Getting Started with ASP.NET vNext by Setting Up the Environment From Scratch

In this post, I'll walk you through how you can set up your environment from scratch to get going with ASP.NET vNext.
2014-09-28 19:08
Tugberk Ugurlu


I'm guessing that you have already heard the news about ASP.NET vNext. It was announced publicly a few months back at TechEd North America 2014 and it's being rewritten from the ground up, which means: "Say goodbye to our dear System.Web.dll" :) No kidding, I'm pretty serious :) It brings lots of improvements which will take application and development performance to the next level for us .NET developers. ASP.NET vNext is coming on strong and there is already a good amount of resources for you to dig into. If you haven't done so yet, navigate to the bottom of this post to go through the links under Resources. However, I strongly suggest you check out Daniel Roth’s talk on ASP.NET vNext at TechEd New Zealand 2014 first, which is probably the best introduction to ASP.NET vNext.

What I would like to go through in this post is how you can set up your environment from scratch to get going with ASP.NET vNext. I also think that this post is useful for understanding the key concepts behind this new development environment.

First of all, you need to install kvm, which stands for K Version Manager. You can install kvm by running the below command in your command prompt on Windows.

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/master/kvminstall.ps1'))"

This will invoke an elevated PowerShell command prompt and install a few things on your machine. Actually, everything that this command installs lives under the %userprofile%\.kre directory.

image

Now, install the latest available KRuntime environment. You do this by running the kvm upgrade command:

image

The latest version is installed from the default feed (which is https://www.myget.org/F/aspnetmaster/api/v2 at the time of this writing). We can verify that the K environment is really installed by running kvm list, which will list the installed K environments with their associated information.

image

Here, we only have the good old desktop CLR. If we want to work against CoreCLR (a.k.a. K10), we should install it using the –svrc50 switch:

image

You can switch between versions using the "kvm use" command. You can also use the –p switch to persist your runtime choice, which makes it your default and keeps it across processes.

image

Our system is now ready for ASP.NET vNext development. I have a tiny working AngularJS application that you can find here in my GitHub repository. This was a pure HTML + CSS + JavaScript web application which required no backend. However, at some point I needed some backend functionality, so I integrated it with ASP.NET vNext. Here is how I did it:

First, we need to specify the NuGet feeds that we will be using for our application. Go ahead and place the following content into a NuGet.config file inside the root of your solution:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
      <clear />
      <add key="AspNetVNext" value="https://www.myget.org/F/aspnetrelease/api/v2" />
      <add key="NuGet.org" value="https://nuget.org/api/v2/" />
  </packageSources>
</configuration>

Next, add a project.json file inside your application directory. This will hold your dependencies and commands. It can contain more, but we will stick with those for this post:

{
    "dependencies": {
        "Kestrel": "1.0.0-alpha3",
        "Microsoft.AspNet.Diagnostics": "1.0.0-alpha3",
        "Microsoft.AspNet.Hosting": "1.0.0-alpha3",
        "Microsoft.AspNet.Mvc": "6.0.0-alpha3",
        "Microsoft.AspNet.Server.WebListener": "1.0.0-alpha3"
    },
    
    "commands": { 
        "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://localhost:5001",
        "kestrel": "Microsoft.AspNet.Hosting --server Kestrel --server.urls http://localhost:5004"
    },
    "frameworks": {
        "net45": {},
        "k10": {}
    }
}
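The web and kestrel commands above hand things over to Microsoft.AspNet.Hosting, which looks for a Startup class in the project. The exact API surface moved around between the alpha releases, but a minimal Startup at the time looked roughly like the sketch below; UseWelcomePage comes from the Microsoft.AspNet.Diagnostics package referenced above:

using Microsoft.AspNet.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Minimal pipeline: just render the diagnostics welcome page for every request.
        app.UseWelcomePage();
    }
}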

The runtime installation gives you a few things, and one of them is the kpm tool, which allows you to manage your packages. You can think of this as NuGet (indeed, it uses NuGet behind the scenes), but it knows how to read your project.json file and installs the packages according to that. If you call kpm now, you can see the options it gives you.

As you have the project.json ready, you can now run kpm restore:

image

Note that the restore output is a little different if you are using an alpha4 release:

image

Based on the commands available in your project.json file, you can now run one of them to fire up your application:

image

Also, you can run "set KRE_TRACE=1" before running your command to see diagnostic details about the process if you need to:

image

My little app is now running:

image

Resources
