Owin.Dependencies: An IoC Container Adapter Into OWIN Pipeline

Owin.Dependencies is an IoC container adapter into the OWIN pipeline. This post will walk you through the Autofac IoC container implementation and the ASP.NET Web API framework adapter for OWIN dependencies.
2013-09-06 07:33
Tugberk Ugurlu


Update on 2013-11-05:

The project has been renamed from Owin.Dependencies to DotNetDoodle.Owin.Dependencies.

Dependency injection is a design pattern that most frameworks in the .NET ecosystem support as a first-class citizen: ASP.NET Web API, ASP.NET MVC, ASP.NET SignalR, NancyFx and so on. Also, most IoC container implementations for these frameworks allow us to have a per-request lifetime scope, which is nearly always what we want. This is great as long as we stay inside the borders of our little framework. However, with OWIN, our pipeline is much more extended. I have tried to give some insight on OWIN and its Microsoft implementation, Katana. Also, Ali has pointed out some really important facts about OWIN in his post. Filip also has really great content on OWIN. Don't miss the good stuff from Damian Hickey on his blog and his GitHub repository.

Inside this extended pipeline, the request goes through several middlewares, and one of these middlewares may get us into a specific framework. At that point, we will be tied to the framework's dependency injection layer. This means that all of the instances you have created inside the previous middlewares (assuming you have a different IoC container there) are basically no good inside your framework's pipeline. This issue had bitten me and I was searching for a way to at least work around this annoying problem. Then I ended up talking to @randompunter and the idea came out:

The idea is actually quite simple:

  • Have a middleware which starts a per-request lifetime scope at the very beginning inside the OWIN pipeline.
  • Then, dispose the scope at the very end when the pipeline ends.
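A minimal sketch of what such a middleware might look like, written against the raw OWIN AppFunc signature. Note that the environment key and the BeginScope method name here are my assumptions for illustration, not necessarily what the package ends up using:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// The classic OWIN application delegate signature.
using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public class DependencyScopeMiddleware
{
    private readonly AppFunc _next;
    private readonly IOwinDependencyResolver _resolver;

    public DependencyScopeMiddleware(AppFunc next, IOwinDependencyResolver resolver)
    {
        _next = next;
        _resolver = resolver;
    }

    public async Task Invoke(IDictionary<string, object> environment)
    {
        // Begin the per-request scope and expose it to downstream middlewares
        // through the OWIN environment dictionary (hypothetical key).
        using (IOwinDependencyScope scope = _resolver.BeginScope())
        {
            environment["owin.dependencies.Scope"] = scope;
            await _next(environment);
        } // the scope, and everything disposable it resolved, is disposed here
    }
}
```

The `using` block is the whole trick: the scope lives exactly as long as the rest of the pipeline takes to run.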

This idea made me create the Owin.Dependencies project. Owin.Dependencies is an IoC container adapter into the OWIN pipeline. The core assembly includes just a few interfaces and extension methods; that's all. As expected, it has a pre-release NuGet package, too:

PM> Install-Package DotNetDoodle.Owin.Dependencies -Pre

Besides the Owin.Dependencies package, I have pushed two other packages and I expect more to come (from me and from you, the bright .NET developer). Basically, we have two things to worry about here, and these two parts are decoupled from each other:

  • IoC container implementation for Owin.Dependencies
  • The framework adapter for Owin.Dependencies

The IoC container implementation is very straightforward: you just implement two interfaces, IOwinDependencyResolver and IOwinDependencyScope. I stole the dependency resolver pattern that ASP.NET Web API has been using, but this may change over time as I get feedback. My first OWIN IoC container implementation is for Autofac and it has a separate NuGet package, too:
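The post doesn't list the interface members, but given that the pattern is borrowed from ASP.NET Web API's dependency resolver, a plausible shape would be something like the following. This is my guess for illustration, not the actual definitions from the package:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical shapes, inferred from the ASP.NET Web API dependency
// resolver pattern; the real interfaces may differ.
public interface IOwinDependencyScope : IDisposable
{
    object GetService(Type serviceType);
    IEnumerable<object> GetServices(Type serviceType);
}

public interface IOwinDependencyResolver : IOwinDependencyScope
{
    // Starts a new per-request lifetime scope.
    IOwinDependencyScope BeginScope();
}
```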

PM> Install-Package DotNetDoodle.Owin.Dependencies.Autofac -Pre

The usage of the OWIN IoC container implementation is very simple. You just need an instance of an IOwinDependencyResolver implementation, and you pass it to the UseDependencyResolver extension method on the IAppBuilder interface. Make sure to call UseDependencyResolver before registering any other middleware if you want the per-request dependency scope to be available in all middlewares. Here is a very simple Startup class after installing the Owin.Dependencies.Autofac package:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        IContainer container = RegisterServices();
        AutofacOwinDependencyResolver resolver = new AutofacOwinDependencyResolver(container);

        app.UseDependencyResolver(resolver)
            .Use<RandomTextMiddleware>();
    }

    public IContainer RegisterServices()
    {
        ContainerBuilder builder = new ContainerBuilder();

        builder.RegisterType<Repository>()
                .As<IRepository>()
                .InstancePerLifetimeScope();

        return builder.Build();
    }
}

After registering the dependency resolver into the OWIN pipeline, we registered our custom middleware called "RandomTextMiddleware". This middleware has been built using a handy abstract class called "OwinMiddleware" from the Microsoft.Owin package. The Invoke method of the OwinMiddleware class will be invoked on each request, and there we can decide whether to handle the request, pass it on to the next middleware, or do both. The Invoke method gets an IOwinContext instance, and through it we can get to the per-request dependency scope. Here is the code:

public class RandomTextMiddleware : OwinMiddleware
{
    public RandomTextMiddleware(OwinMiddleware next)
        : base(next)
    {
    }

    public override async Task Invoke(IOwinContext context)
    {
        IOwinDependencyScope dependencyScope = 
            context.GetRequestDependencyScope();
            
        IRepository repository = 
            dependencyScope.GetService(typeof(IRepository)) 
                as IRepository;

        if (context.Request.Path == "/random")
        {
            await context.Response.WriteAsync(
                repository.GetRandomText()
            );
        }
        else
        {
            context.Response.Headers.Add(
                "X-Random-Sentence", 
                new[] { repository.GetRandomText() });
                
            await Next.Invoke(context);
        }
    }
}

I accept that we kind of have an anti-pattern here: Service Locator. It would be really nice to get the object instances injected as method parameters, but I couldn't figure out how to do that for now; that will be my next attempt. Here, we get an implementation of IRepository, and it is disposable. When we invoke this little pipeline now, we will see that disposal is handled by the infrastructure that the Owin.Dependencies implementation provides.
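Until then, a tiny generic extension method can at least hide the typeof/cast dance shown in the middleware above. This is just an illustrative helper I'm sketching here, not part of the package:

```csharp
using System;

public static class DependencyScopeExtensions
{
    // Illustrative helper: wraps the non-generic GetService(Type) call
    // used in RandomTextMiddleware into a typed, generic version.
    public static TService GetService<TService>(this IOwinDependencyScope scope)
        where TService : class
    {
        return scope.GetService(typeof(TService)) as TService;
    }
}
```

With this in place, the middleware body shrinks to `dependencyScope.GetService<IRepository>()`.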

[screenshot]

So far so good, but what happens when we need to integrate with a specific framework which has its own dependency injection implementation, such as ASP.NET Web API? This is where the framework-specific adapters come in. I provided the ASP.NET Web API adapter, and it has its own NuGet package which depends on the Microsoft.AspNet.WebApi.Owin package.

PM> Install-Package DotNetDoodle.Owin.Dependencies.Adapters.WebApi -Pre

Besides this package, I also installed the Autofac.WebApi5 pre-release package to register the API controllers inside the Autofac ContainerBuilder. Here is the modified Startup class to integrate with ASP.NET Web API:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        IContainer container = RegisterServices();
        AutofacOwinDependencyResolver resolver = 
            new AutofacOwinDependencyResolver(container);

        HttpConfiguration config = new HttpConfiguration();
        config.Routes.MapHttpRoute("DefaultHttpRoute", "api/{controller}");

        app.UseDependencyResolver(resolver)
           .Use<RandomTextMiddleware>()
           .UseWebApiWithOwinDependencyResolver(resolver, config);
    }

    public IContainer RegisterServices()
    {
        ContainerBuilder builder = new ContainerBuilder();

        builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
        builder.RegisterType<Repository>()
               .As<IRepository>()
               .InstancePerLifetimeScope();

        return builder.Build();
    }
}

I also added an ASP.NET Web API controller class to serve all the texts I have. As ASP.NET Web API has first-class DI support and our Web API adapter handles everything for us, we can inject our dependencies through the controller constructor.

public class TextsController : ApiController
{
    private readonly IRepository _repo;

    public TextsController(IRepository repo)
    {
        _repo = repo;
    }

    public IEnumerable<string> Get()
    {
        return _repo.GetTexts();
    }
}

Now, when we send a request to /api/texts, the IRepository implementation is called twice: once from our custom middleware and once from the Web API's TextsController. At the end of the request, the instance is disposed.

[screenshot]

The source code of this sample is available in the Owin.Dependencies repository on GitHub. I agree that there might be some hiccups with this implementation, and I expect to have a more stable solution in the near future. Have fun with this little gem and give your feedback :)

Microsoft Turkey Summer School Presentation Samples and Links for .NET Web Stack

I was at Microsoft's Turkey headquarters giving talks on the Microsoft Web Stack for Microsoft Summer School, and here are the presentation samples and links for the .NET Web Stack.
2013-08-25 15:09
Tugberk Ugurlu


This week, I was at Microsoft's Turkey headquarters giving talks on the Microsoft Web Stack three days in a row for Microsoft Summer School. It was a really good opportunity to give MSPs information about the latest Microsoft web technologies. During these three days, I shared a lot of code and resources to get up to speed on web and web services development. In this post, I will share the links to those resources and code repositories.

Code Samples

Microsoft Web Stack

ASP.NET MVC

ASP.NET SignalR

ASP.NET Web API

OWIN and Katana

Turkish I Problem on RavenDB and Solving It with Custom Lucene Analyzers

Yesterday, I ran into a Turkish I problem on RavenDB and here is how I solved it with a custom Lucene analyzer.
2013-07-16 14:37
Tugberk Ugurlu


RavenDB, by default, uses a custom Lucene.Net analyzer named LowerCaseKeywordAnalyzer, which makes all your queries case-insensitive; this is what I would expect. For example, the following query finds the User whose Name property is set to "TuGbErK":

class Program
{
    static void Main(string[] args)
    {
        const string DefaultDatabase = "EqualsTryOut";
        IDocumentStore store = new DocumentStore
        {
            Url = "http://localhost:8080",
            DefaultDatabase = DefaultDatabase
        }.Initialize();

        store.DatabaseCommands.EnsureDatabaseExists(DefaultDatabase);

        using (var ses = store.OpenSession())
        {
            var user = new User { 
                Name = "TuGbErK", 
                Roles = new List<string> { "adMin", "GuEst" } 
            };
            
            ses.Store(user);
            ses.SaveChanges();

            //this finds name:TuGbErK
            var user1 = ses.Query<User>()
                .Where(usr => usr.Name == "tugberk")
                .FirstOrDefault();
        }
    }
}

public class User
{
    public string Id { get; set; }
    public string Name { get; set; }
    public ICollection<string> Roles { get; set; }
}

The problem starts appearing when you have a situation that requires you to store Turkish text. You may ask why at this point, which makes sense. The problem is related to the well-known Turkish "I" problem. Let's try to reproduce it with an example on RavenDB.

class Program
{
    static void Main(string[] args)
    {
        const string DefaultDatabase = "EqualsTryOut";
        IDocumentStore store = new DocumentStore
        {
            Url = "http://localhost:8080",
            DefaultDatabase = DefaultDatabase
        }.Initialize();

        store.DatabaseCommands.EnsureDatabaseExists(DefaultDatabase);

        using (var ses = store.OpenSession())
        {
            var user = new User { 
                Name = "Irmak", 
                Roles = new List<string> { "adMin", "GuEst" } 
            };
            
            ses.Store(user);
            ses.SaveChanges();

            //This fails due to the Turkish I
            var user1 = ses.Query<User>()
                .Where(usr => usr.Name == "ırmak")
                .FirstOrDefault();

            //this finds name:Irmak
            var user2 = ses.Query<User>()
                .Where(usr => usr.Name == "IrMak")
                .FirstOrDefault();
        }
    }
}

public class User
{
    public string Id { get; set; }
    public string Name { get; set; }
    public ICollection<string> Roles { get; set; }
}

Here, we have the same code, but the Name value we are storing is different: Irmak. Irmak is a Turkish name which also means "river" in English (which is totally not the point here), and it starts with the Turkish letter "I". The lowercased version of this letter is "ı", and this is where the problem arises, because if you lowercase this character in an invariant-culture manner, it will be transformed into "i", not "ı". This is what RavenDB does with its LowerCaseKeywordAnalyzer, and that's why we can't find anything with the first query above, where we searched against "ırmak". With the second query, we can find the User that we are looking for, as "IrMak" will be lowercased into "irmak".
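You can see the culture difference with a couple of lines of plain .NET code; this little console snippet is only here to illustrate the casing behavior, it has nothing to do with RavenDB itself:

```csharp
using System;
using System.Globalization;

class TurkishLowerDemo
{
    static void Main()
    {
        // Invariant-culture lowercasing: 'I' becomes the dotted 'i'.
        Console.WriteLine("Irmak".ToLowerInvariant()); // prints "irmak"

        // Turkish-culture lowercasing: 'I' becomes the dotless 'ı'.
        Console.WriteLine("Irmak".ToLower(new CultureInfo("tr-TR"))); // prints "ırmak"
    }
}
```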

The Solution with a Custom Analyzer

The default analyzer that RavenDB uses is the LowerCaseKeywordAnalyzer, and it uses the LowerCaseKeywordTokenizer as its tokenizer. Inside that tokenizer, you will see the Normalize method, which is used to lowercase a character in an invariant-culture manner; that is what causes the problem in our case. AFAIK, there is no built-in Lucene.Net tokenizer which handles Turkish characters well (I might be wrong here). So, I decided to modify the LowerCaseKeywordTokenizer according to my specific needs. It was a very naive and minor change which worked, but I'm not sure if I handled it well. You can find the source code of the TurkishLowerCaseKeywordTokenizer and TurkishLowerCaseKeywordAnalyzer classes in my GitHub repository.
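The gist of the change is to make the tokenizer lowercase with the Turkish culture instead of the invariant one; roughly something like the following (a sketch of the idea, not the exact code from the repository):

```csharp
using System.Globalization;

// Sketch: lowercase with the Turkish culture so that 'I' maps to 'ı'
// instead of the invariant mapping to 'i'.
private static readonly CultureInfo Turkish = new CultureInfo("tr-TR");

protected char Normalize(char c)
{
    return char.ToLower(c, Turkish);
}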

Using a Custom-Built Analyzer on RavenDB

RavenDB allows us to use custom analyzers and to control the analyzer per field. If you are going to use a built-in Lucene analyzer for a field, you need to pass the FullName of the analyzer type, just like in the example below, which is a straight copy and paste from the RavenDB documentation.

store.DatabaseCommands.PutIndex(
    "Movies",
    new IndexDefinition
        {
            Map = "from movie in docs.Movies select new { movie.Name, movie.Tagline }",
            Analyzers =
                {
                    { "Name", typeof(SimpleAnalyzer).FullName },
                    { "Tagline", typeof(StopAnalyzer).FullName },
                }
        });

On the other hand, RavenDB also allows us to drop our own custom analyzers in:

"You can also create your own custom analyzer, compile it to a dll and drop it in in directory called "Analyzers" under the RavenDB base directory. Afterward, you can then use the fully qualified type name of your custom analyzer as the analyzer for a particular field."

There are a couple of things that you need to be careful about when going down this road; for one, the server should not be running while you drop the assembly in. After I stopped my RavenDB server, I dropped my assembly, which contains the TurkishLowerCaseKeywordAnalyzer, into the "Analyzers" folder under the RavenDB base directory. On the client side, here is my code, which consists of the index creation and the query:

class Program
{
    static void Main(string[] args)
    {
        const string DefaultDatabase = "EqualsTryOut";
        IDocumentStore store = new DocumentStore
        {
            Url = "http://localhost:8080",
            DefaultDatabase = DefaultDatabase
        }.Initialize();

        IndexCreation.CreateIndexes(typeof(Users).Assembly, store);
        store.DatabaseCommands.EnsureDatabaseExists(DefaultDatabase);

        using (var ses = store.OpenSession())
        {
            var user = new User { 
                Name = "Irmak", 
                Roles = new List<string> { "adMin", "GuEst" } 
            };
            ses.Store(user);
            ses.SaveChanges();

            //this finds name:Irmak
            var user1 = ses.Query<User, Users>()
                .Where(usr => usr.Name == "ırmak")
                .FirstOrDefault();
        }
    }
}

public class User
{
    public string Id { get; set; }
    public string Name { get; set; }
    public ICollection<string> Roles { get; set; }
}

public class Users : AbstractIndexCreationTask<User>
{
    public Users()
    {
        Map = users => from user in users
                       select new 
                       {
                          user.Name 
                       };

        Analyzers.Add(
            x => x.Name, 
            typeof(LuceneAnalyzers.TurkishLowerCaseKeywordAnalyzer)
                .AssemblyQualifiedName);
    }
}

It worked like a charm. I'm hoping this helps you solve this annoying problem, and please post a comment if you know a better way of handling it.

Resources

Scaling out SignalR with a Redis Backplane and Testing It with IIS Express

Learn how easy it is to scale out SignalR with a Redis backplane and simulate a local web farm scenario with IIS Express.
2013-07-02 11:01
Tugberk Ugurlu


SignalR was built with scale-out in mind from day one, and it ships with some scale-out providers such as Redis, SQL Server and Windows Azure Service Bus. There is a really nice documentation series on this at the official ASP.NET SignalR web site, and you can find Redis, Windows Azure Service Bus and SQL Server samples there. In this quick post, I would like to show you how easy it is to get SignalR up and running in a scale-out scenario with a Redis backplane.

Sample Chat Application

First of all, I have a very simple and stupid real-time web application. The source code is also available on GitHub if you are interested: RedisScaleOutSample. Guess what it is? Yes, you’re right, it’s a chat application :) I’m using SignalR 2.0.0-beta2 in this sample, and here is what my hub looks like:

public class ChatHub : Hub
{
    public void Send(string message)
    {
        Clients.All.messageReceived(message);
    }
}

A very simple hub implementation. Now, let’s look at the entire HTML and JavaScript code that I have:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Chat Sample</title>
</head>
<body>
    <div>
        <input type="text" id="msg" /> 
        <button type="button" id="send">Send</button>
    </div>
    <ul id="messages"></ul>

    <script src="/Scripts/jquery-1.6.4.min.js"></script>
    <script src="/Scripts/jquery.signalR-2.0.0-beta2.min.js"></script>
    <script src="/signalr/hubs"></script>
    <script>
        (function () {

            var chatHub = $.connection.chatHub,
                msgContainer = $('#messages');

            chatHub.client.messageReceived = function (msg) {
                $('<li>').text(msg).appendTo(msgContainer);
            };

            $.connection.hub.start().done(function () {

                $('#send').click(function () {
                    var msg = $('#msg').val();
                    chatHub.server.send(msg);
                });
            });
        }());
    </script>
</body>
</html>

When I run the application, I can see that it works like a charm:

[screenshot]

This is a single-machine scenario, and if you move this application to multiple VMs, a web farm or whatever your choice of scaling out is, you will see your application misbehaving. The reason is actually very simple to understand. Let’s see why.

Understanding the Need of Backplane

Assume that you have two VMs for your super chat application: VM-1 and VM-2. Client A comes to your application and your load balancer routes that request to VM-1. As your SignalR connection will be persisted as long as it can be, you will be connected to VM-1 for any messages you receive (assuming you are not on the Long Polling transport) and send (if you are on WebSockets). Then, client B comes to your application and the load balancer routes that request to VM-2 this time. What happens now? Any messages that client A sends will not be received by client B, because they are on different nodes and SignalR has no idea about any node other than the one it is executing on.

To demonstrate this scenario easily in our development environment, I will fire up the same application in different ports through IIS Express with the following script:

function programfiles-dir {
    if (is64bit) {
        (Get-Item "Env:ProgramFiles(x86)").Value
    } else {
        (Get-Item "Env:ProgramFiles").Value
    }
}

function is64bit() {
    return ([IntPtr]::Size -eq 8)
}

$executingPath = (Split-Path -parent $MyInvocation.MyCommand.Definition)
$appPath = (join-path $executingPath "RedisScaleOutSample")
$iisExpress = "$(programfiles-dir)\IIS Express\iisexpress.exe"
$args1 = "/path:$appPath /port:9090 /clr:v4.0"
$args2 = "/path:$appPath /port:9091 /clr:v4.0"

start-process $iisExpress $args1 -windowstyle Normal
start-process $iisExpress $args2 -windowstyle Normal

I’m running IIS Express from the command line here, which is a very powerful feature if you ask me. When you execute the above script (which is run.ps1 in my sample application), you will have the chat application running on localhost:9090 and localhost:9091:

[screenshot]

When we try the same scenario now by connecting to both endpoints, we will see that it’s not working as it should:

[screenshot]

SignalR makes it really easy to solve this type of problem. In its core architecture, SignalR uses a pub/sub mechanism to broadcast messages. Every message in SignalR goes through the message bus, and by default SignalR uses Microsoft.AspNet.SignalR.Messaging.MessageBus, which implements IMessageBus, as its in-memory message bus. However, this is fully replaceable, and it is where you plug in your own message bus implementation for your scale-out scenarios. The SignalR team provides a bunch of backplanes for you to work with, but you can totally implement your own if none of the provided scale-out providers is enough for you. For instance, the community has a RabbitMQ message bus implementation for SignalR: SignalR.RabbitMq.

Hooking up Redis Backplane to Your SignalR Application

In order to configure Redis as the backplane for SignalR, we need to have a Redis server up and running somewhere. The Redis project does not directly support Windows, but Microsoft Open Tech provides a Redis Windows port which targets both x86 and x64 architectures. The better news is that they distribute the binaries through NuGet: http://nuget.org/packages/Redis-64.

[screenshot]

Now that I have the Redis binaries, I can get the Redis server up. For our demonstration purposes, running redis-server.exe without any arguments, with the default configuration, should be enough:

[screenshot]

The Redis server is now running on port 6379, and we can configure SignalR to use Redis as its backplane. The first thing to do is install the SignalR Redis Messaging Backplane NuGet package. As I’m using SignalR 2.0.0-beta2, I will install version 2.0.0-beta2 of the Microsoft.AspNet.SignalR.Redis package.

[screenshot]

The last thing to do is to write one line of code to replace the IMessageBus implementation:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalHost.DependencyResolver
            .UseRedis("localhost", 6379, string.Empty, "myApp");

        app.MapHubs();
    }
}

The parameters we pass into the UseRedis method relate to your Redis server. In our case here, we don’t have a password, which is why we passed string.Empty. Now, let’s compile the application and run the same PowerShell script to stand up two endpoints, which simulates a web farm scenario in your development environment. When we navigate to both endpoints, we will see that messages are broadcast to all nodes no matter which node they arrive at:

[screenshot]

That was insanely easy to implement, wasn’t it?

A Few Things to Keep in Mind

The purpose of SignalR’s backplane approach is to enable you to serve more clients in cases where one server is becoming your bottleneck. As you can imagine, having a backplane for your SignalR application can affect the message throughput, as your messages need to go through the backplane first and be distributed from there to all subscribers. For high-frequency real-time applications, such as real-time games, a backplane is not recommended; for those cases, cleverer load balancers are what you would want. Damian Edwards covered SignalR and different scale-out cases in his Build 2013 talk, and I strongly recommend checking it out if you are interested.

Getting Started With OWIN and the Katana Stack

OWIN and Katana are the best way to build web-server-independent web applications in the .NET ecosystem, and this post will help you get started.
2013-05-04 18:04
Tugberk Ugurlu


As usual, I tweeted about my excitement which finally led me to this blog post:

The OWIN and Katana projects are the effort moving in that direction, and even though Katana is at an early stage, the adoption is promising. Katana is a flexible set of components for building and hosting OWIN-based web applications; this is the definition that I pulled from its CodePlex site. The stack includes several so-called OWIN middlewares, plus server and host implementations which work with OWIN-based web applications. You can literally get your application going in no time, without needing any installation on the machine other than .NET itself. The other benefit is that your application is not tied to one web server; you can choose any of the web server and host implementations at any time without needing to recompile your project's code.

To get started with Katana today, the best way is to jump to your Visual Studio and navigate to Tools > Library Package Manager > Package Manager Settings. From there, navigate to Package Sources and add the following sources:

Now, we should be able to see those sources through the PMC (Package Manager Console):

[screenshot]

We performed these actions because the latest bits of the Katana and OWIN Hosting projects are pushed to MyGet, and those packages are what you want to work with for now. As you can guess, those packages are not stable and not meant for production use, but they are good for demonstration cases :) Let's start writing some code and see the beauty.

I started by creating an empty C# class library project. Before moving forward, I would like to take a step back and see what packages I have. I selected MyGet Owin as the current package source and executed the following command: Get-Package -ListAvailable -pre

[screenshot]

These packages come from the OWIN Hosting project, and I encourage you to check out its source code. Let's do the same for the MyGet Katana source:

[screenshot]

We got more packages this time; these come from the Katana project, which is hosted on CodePlex. They consist of OWIN middlewares and host and server implementations, and we will have a chance to use some of them now.

Let's start by installing a host implementation: the Microsoft.Owin.Host.HttpListener pre-release package. Now, change the current package source selection to MyGet OWIN and install the Owin.Extensions package to get the necessary bits and pieces to complete our demo.

The Owin.Extensions package will bring down another package named Owin, and that Owin package is actually the only necessary one; the others are just there to help us. As you can see, there is no assembly hell involved when working with OWIN. In fact, Owin.dll contains only one interface: IAppBuilder. You may wonder how this thing even works, then. The answer is simple: by convention and discoverability on pure .NET types. For a more in-depth answer to that question, check out Louis DeJardin's awesome talk on OWIN.

What we need to do now is create a class called Startup, and that class will have a method called Configuration which takes an IAppBuilder implementation as a parameter.

public partial class Startup {

    public void Configuration(IAppBuilder app) {

        app.UseHandler(async (request, response) => {

            response.ContentType = "text/html";
            await response.WriteAsync("OWIN Hello World!!");
        });
    }
}

For demonstration purposes, I used the UseHandler extension method to handle the requests and return the responses. In our case above, all paths will return the same response, which is kind of silly but OK for demonstration purposes. To run this application, we need some sort of glue to tie our Startup class to the host implementation that we brought down: Microsoft.Owin.Host.HttpListener. That glue is OwinHost.exe, which we can install from the MyGet Katana NuGet feed.

[screenshot]

OwinHost.exe is going to prepare the settings to host the application, hand them to the hosting implementation and then get out of the way. To make it run, execute OwinHost.exe without any arguments under the root of your project and you should see a screen like the one below:

[screenshot]

We got this unfriendly error message because OwinHost.exe was unable to locate our assemblies: it looks under the bin directory, but our project outputs the compiled assemblies under bin\Debug or bin\Release, depending on the configuration. Change the output directory to bin through the project properties, rebuild the solution and run OwinHost.exe again. This time there should be no error, and if we navigate to localhost:5000 (as 5000 is the default port), we should see the response that we prepared:

[screenshot]

Cool! You may wonder how it knows which host implementation to use. The default behavior is auto-detection, but we can explicitly specify the server type as well (however, it's kind of confusing how to do that today):

[screenshot]

OwinHost.exe is great but, as you can guess, it's not our only option. Thanks to the Katana project, we can easily host our application in our own process. This option is particularly useful if you would like to deploy your application as a Windows Service or host it in a Windows Azure Worker Role. To demonstrate this option, I created a console application and referenced our assembly. Then, I installed the Microsoft.Owin.Hosting package to be able to host it easily. Here is the code to do that:

class Program {

    static void Main(string[] args) {

        using (IDisposable server = WebApp.Start<Startup>()) {

            Console.WriteLine("Started...");
            Console.ReadKey();
        }
    }
}

When I run the console application now, I can navigate to localhost:5000 again and see my response:

[screenshot]

I plan on writing a few more posts on OWIN-based applications, but for now, that's all I can give away :) The code I demoed here is available on GitHub as well: https://github.com/tugberkugurlu/OwinSamples/tree/master/HelloWorldOwin. Just run the Build.cmd file before starting.

Note: Earlier today, I was planning to play with ASP.NET Web API's official OWIN host implementation (which will come out in vNext) and write a blog post on that. The playing part went well, but Filip W. was again a Sergeant Buzzkill :s


References