Scaling out SignalR with a Redis Backplane and Testing It with IIS Express

Learn how easy it is to scale out SignalR with a Redis backplane and simulate a local web farm scenario with IIS Express
2013-07-02 11:01
Tugberk Ugurlu


SignalR was built with scale-out in mind from day one, and the team ships several scale-out providers such as Redis, SQL Server and Windows Azure Service Bus. There is a really nice documentation series on this on the official ASP.NET SignalR web site, where you can find Redis, Windows Azure Service Bus and SQL Server samples. In this quick post, I would like to show you how easy it is to get SignalR up and running in a scale-out scenario with a Redis backplane.

Sample Chat Application

First of all, I have a very simple and stupid real-time web application. The source code is also available on GitHub if you are interested: RedisScaleOutSample. Guess what it is? Yes, you’re right. It’s a chat application :) I’m using SignalR 2.0.0-beta2 in this sample and here is what my hub looks like:

public class ChatHub : Hub
{
    public void Send(string message)
    {
        Clients.All.messageReceived(message);
    }
}

A very simple hub implementation. Now, let’s look at the entire HTML and JavaScript code that I have:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Chat Sample</title>
</head>
<body>
    <div>
        <input type="text" id="msg" /> 
        <button type="button" id="send">Send</button>
    </div>
    <ul id="messages"></ul>

    <script src="/Scripts/jquery-1.6.4.min.js"></script>
    <script src="/Scripts/jquery.signalR-2.0.0-beta2.min.js"></script>
    <script src="/signalr/hubs"></script>
    <script>
        (function () {

            var chatHub = $.connection.chatHub,
                msgContainer = $('#messages');

            chatHub.client.messageReceived = function (msg) {
                $('<li>').text(msg).appendTo(msgContainer);
            };

            $.connection.hub.start().done(function () {

                $('#send').click(function () {
                    var msg = $('#msg').val();
                    chatHub.server.send(msg);
                });
            });
        }());
    </script>
</body>
</html>

When I run the application, I can see that it works like a charm:

image

This is a single-machine scenario, and if you move this application to multiple VMs, a Web Farm or whatever your choice of scaling out is, you will see your application misbehave. The reason is actually very simple. Let’s see why.

Understanding the Need for a Backplane

Assume that you have two VMs for your super chat application: VM-1 and VM-2. Client-a comes to your application and your load balancer routes that request to VM-1. As your SignalR connection will be persisted for as long as it can be, you will stay connected to VM-1 for any messages you receive (assuming you are not on the Long Polling transport) and send (if you are on WebSockets). Then, client-b comes to your application and the load balancer routes that request to VM-2 this time. What happens now? Any message that client-a sends will not be received by client-b because they are on different nodes, and SignalR has no idea about any node other than the one it is executing on.
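To make the failure mode concrete, here is a toy JavaScript model of the situation (the names and structure are made up for illustration, not SignalR internals): each node only knows about its own in-memory list of clients, so a broadcast on one node never reaches the other.

```javascript
// Hypothetical sketch: each "node" keeps its own client list, exactly like a
// single in-memory message bus, so broadcasts only reach local clients.
function createNode() {
  const clients = [];
  return {
    connect(onMessage) { clients.push(onMessage); },
    send(msg) { clients.forEach(fn => fn(msg)); } // local fan-out only
  };
}

const vm1 = createNode();
const vm2 = createNode();

const received = { a: [], b: [] };
vm1.connect(msg => received.a.push(msg)); // client-a landed on VM-1
vm2.connect(msg => received.b.push(msg)); // client-b landed on VM-2

vm1.send('hello from client-a');

console.log(received.a); // client-a sees the message
console.log(received.b); // client-b sees nothing: VM-2 never heard about it
```

Nothing here is wrong with either node; the problem is purely that there is no shared channel between them, which is exactly what a backplane adds.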

To demonstrate this scenario easily in our development environment, I will fire up the same application on two different ports through IIS Express with the following script:

function programfiles-dir {
    if (is64bit) {
        (Get-Item "Env:ProgramFiles(x86)").Value
    } else {
        (Get-Item "Env:ProgramFiles").Value
    }
}

function is64bit() {
    return ([IntPtr]::Size -eq 8)
}

$executingPath = (Split-Path -parent $MyInvocation.MyCommand.Definition)
$appPath = (join-path $executingPath "RedisScaleOutSample")
$iisExpress = "$(programfiles-dir)\IIS Express\iisexpress.exe"
$args1 = "/path:$appPath /port:9090 /clr:v4.0"
$args2 = "/path:$appPath /port:9091 /clr:v4.0"

start-process $iisExpress $args1 -windowstyle Normal
start-process $iisExpress $args2 -windowstyle Normal

I’m running IIS Express from the command line here, and it’s a very powerful feature if you ask me. When you execute the above script (which is run.ps1 in my sample application), you will have the chat application running on localhost:9090 and localhost:9091:

image

When we try the same scenario now by connecting to both endpoints, you will see that it’s not working as it should:

image

SignalR makes it really easy to solve this type of problem. At its core, SignalR uses a pub/sub mechanism to broadcast messages. Every message in SignalR goes through the message bus and, by default, SignalR uses Microsoft.AspNet.SignalR.Messaging.MessageBus, which implements IMessageBus, as its in-memory message bus. However, this is fully replaceable, and it is where you plug in your own message bus implementation for your scale-out scenarios. The SignalR team provides a bunch of backplanes for you to work with, but you can totally implement your own if none of the provided scale-out options is enough for you. For instance, the community has a RabbitMQ message bus implementation for SignalR: SignalR.RabbitMq.
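The pub/sub idea behind the backplane can be sketched in a few lines of JavaScript. This is a hypothetical illustration, not SignalR’s actual types: every node publishes to and subscribes from one shared bus, so a message sent on any node reaches the clients on all nodes.

```javascript
// Hypothetical sketch of the backplane idea -- the names are made up, not
// SignalR's real API. One shared bus stands in for Redis/SQL Server/Service Bus.
function createSharedBus() {
  const subscribers = [];
  return {
    subscribe(fn) { subscribers.push(fn); },
    publish(msg) { subscribers.forEach(fn => fn(msg)); }
  };
}

function createNode(bus) {
  const clients = [];
  bus.subscribe(msg => clients.forEach(fn => fn(msg))); // fan out locally
  return {
    connect(onMessage) { clients.push(onMessage); },
    send(msg) { bus.publish(msg); } // every send goes through the backplane
  };
}

const bus = createSharedBus();
const vm1 = createNode(bus);
const vm2 = createNode(bus);

const seen = [];
vm1.connect(msg => seen.push('a:' + msg));
vm2.connect(msg => seen.push('b:' + msg));

vm1.send('hi'); // both client-a on VM-1 and client-b on VM-2 receive it
```

Swapping the in-memory bus for Redis, SQL Server or Service Bus is exactly the replacement SignalR lets you do through IMessageBus.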

Hooking up a Redis Backplane to Your SignalR Application

In order to configure Redis as the backplane for SignalR, we need to have a Redis server up and running somewhere. The Redis project does not directly support Windows, but Microsoft Open Tech provides a Redis Windows port which targets both x86 and x64 architectures. The even better news is that they distribute the binaries through NuGet: http://nuget.org/packages/Redis-64.

image

Now that I have the Redis binaries, I can get the Redis server up. For our demonstration purposes, running redis-server.exe without any arguments, with the default configuration, should be enough:

image

The Redis server is now running on port 6379, and we can configure SignalR to use Redis as its backplane. The first thing to do is install the SignalR Redis Messaging Backplane NuGet package. As I’m using SignalR 2.0.0-beta2, I will install version 2.0.0-beta2 of the Microsoft.AspNet.SignalR.Redis package.

image

The last thing to do is write one line of code to replace the IMessageBus implementation:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalHost.DependencyResolver
            .UseRedis("localhost", 6379, string.Empty, "myApp");

        app.MapHubs();
    }
}

The parameters we pass into the UseRedis method are related to your Redis server. In our case, we don’t have a password, which is why we passed string.Empty. Now, let’s compile the application and run the same PowerShell script to stand up two endpoints, simulating a web farm scenario in your development environment. When we navigate to both endpoints, we will see that messages are broadcast to all nodes no matter which node they arrive at:

image

That was insanely easy to implement, wasn’t it?

A Few Things to Keep in Mind

The purpose of SignalR’s backplane approach is to enable you to serve more clients in cases where one server is becoming your bottleneck. As you can imagine, having a backplane for your SignalR application can affect the message throughput, as your messages need to go through the backplane first and be distributed from there to all subscribers. For high-frequency real-time applications, such as real-time games, a backplane is not recommended; for those cases, cleverer load balancers are what you would want. Damian Edwards talked about SignalR and different scale-out cases in his Build 2013 talk, and I strongly recommend you check that out if you are interested.

Getting Started With OWIN and the Katana Stack

OWIN and Katana are the best way to build web-server-independent web applications in the .NET ecosystem, and this post will help you get started.
2013-05-04 18:04
Tugberk Ugurlu


As usual, I tweeted about my excitement which finally led me to this blog post:

The OWIN and Katana project is the effort moving in that direction, and even though it's at its early stages, the adoption is promising. Katana is a flexible set of components for building and hosting OWIN-based web applications. That is the definition I pulled from its CodePlex site. The stack includes several so-called OWIN middlewares, plus server and host implementations which work with OWIN-based web applications. You can literally get your application going in no time without needing any installation on the machine other than .NET itself. The other benefit is that your application is not tied to one web server; you can choose any of the web server and host implementations at any time without needing to recompile your project's code.

To get started with Katana today, the best way is to jump to your Visual Studio and navigate to Tools > Library Package Manager > Package Manager Settings. From there, navigate to Package Sources and add the following sources:

Now, we should be able to see those sources through PMC (Package Manager Console):

image

We performed these actions because the latest bits of the Katana and OWIN Hosting projects are pushed to MyGet, and those packages are what you want to work with for now. As you can guess, those packages are not stable and not meant for production use, but they are good for demonstration cases :) Let's start writing some code and see the beauty.

I started by creating an empty C# Class Library project. Before moving forward, I would like to take a step back and see what packages I have. I selected the MyGet Owin as the current package source and executed the following command: Get-Package -ListAvailable -pre

image

These packages are coming from the OWIN Hosting project, and I encourage you to check out the source code. Let's do the same for the MyGet Katana source:

image

We got more packages this time, and these packages are coming from the Katana project, which is hosted on CodePlex. These packages consist of OWIN middlewares and host and server implementations, some of which we will have a chance to use now.

Let's start by installing a host implementation: the Microsoft.Owin.Host.HttpListener pre-release package. Now, change the current package source selection to MyGet OWIN and install the Owin.Extensions package to get the necessary bits and pieces to complete our demo.

The Owin.Extensions package will bring down another package named Owin, and that Owin package is actually the only necessary package to have. The others are just there to help us but, as you can understand, there is no assembly hell involved when working with OWIN. In fact, Owin.dll contains only one interface: IAppBuilder. You may wonder how this thing even works then. The answer is actually simple: by convention and discoverability on pure .NET types. For a more in-depth answer to that question, check out Louis DeJardin's awesome talk on OWIN.
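The convention itself is tiny: in OWIN, an application is just a delegate of shape Func&lt;IDictionary&lt;string, object&gt;, Task&gt; that reads and writes a plain environment dictionary. Here is a rough JavaScript analogy of that idea; the toyHost and the environment keys below are simplified stand-ins for illustration, not the real OWIN host or the full key set.

```javascript
// Analogy, not OWIN itself: the "application" is just a plain async function
// that takes an environment dictionary and fills in the response. No shared
// base class or assembly is needed, only the agreed-upon shape.
async function app(env) {
  env['owin.ResponseHeaders'] = { 'Content-Type': 'text/html' };
  env['owin.ResponseBody'].push('OWIN Hello World!!');
}

// A toy "host" only needs to know the convention, not the app's internals.
async function toyHost(application, path) {
  const env = { 'owin.RequestPath': path, 'owin.ResponseBody': [] };
  await application(env);
  return env['owin.ResponseBody'].join('');
}

toyHost(app, '/').then(body => console.log(body));
```

Because host and application agree only on this dictionary shape, either side can be swapped out independently, which is exactly why you can move between hosts without recompiling.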

What we need to do now is add a class called Startup, and that class will have a method called Configuration which takes an IAppBuilder implementation as a parameter.

public partial class Startup {

    public void Configuration(IAppBuilder app) {

        app.UseHandler(async (request, response) => {

            response.ContentType = "text/html";
            await response.WriteAsync("OWIN Hello World!!");
        });
    }
}

For demonstration purposes, I used the UseHandler extension method to handle requests and return responses. In our case above, all paths return the same response, which is kind of silly but OK for demonstration purposes. To run this application, we need some sort of glue to tie our Startup class to the host implementation that we brought down: Microsoft.Owin.Host.HttpListener. That glue is OwinHost.exe, which we can install from the MyGet Katana NuGet feed.

image

OwinHost.exe is going to prepare the settings to host the application, hand them to the hosting implementation and then get out of the way. To make it run, execute OwinHost.exe without any arguments under the root of your project and you should see a screen like the one below:

image

We got this unfriendly error message because OwinHost.exe was unable to locate our assemblies: it looks under the bin directory, but our project outputs the compiled assemblies under bin\Debug or bin\Release, depending on the configuration. Change the output directory to bin through the Properties menu, rebuild the solution and run OwinHost.exe again. This time there should be no error, and if we navigate to localhost:5000 (as 5000 is the default port), we should see the response we prepared:

image

Cool! You may wonder how it knows which host implementation to use. The default behavior is auto-detection, but we can explicitly specify the server type as well (however, it's kind of confusing how to do that today):

image

OwinHost.exe is great but, as you can guess, it's not our only option. Thanks to the Katana project, we can easily host our application in our own process. This option is particularly useful if you would like to deploy your application as a Windows Service or host it in a Windows Azure Worker Role. To demonstrate this option, I created a console application and referenced our assembly. Then, I installed the Microsoft.Owin.Hosting package to be able to host it easily. Here is the code to do that:

class Program {

    static void Main(string[] args) {

        using (IDisposable server = WebApp.Start<Startup>()) {

            Console.WriteLine("Started...");
            Console.ReadKey();
        }
    }
}

When I run the console application now, I can navigate to localhost:5000 again and see my response:

image

I plan on writing a few more posts on OWIN-based applications but for now, that's all I can give away :) The code I demoed here is available on GitHub as well: https://github.com/tugberkugurlu/OwinSamples/tree/master/HelloWorldOwin (run the Build.cmd file before starting).

Note: Earlier today, I was planning to play with ASP.NET Web API's official OWIN host implementation (which will come out in vNext) and write a blog post on that. The playing part went well but Flip W. was again a Sergeant Buzzkill :s

 


Cleaning up With PowerShell After Removing a Windows Azure Virtual Machine

Removing a Windows Azure Virtual Machine doesn't remove everything for you. This post is all about cleaning up with PowerShell after removing a Windows Azure VM
2013-04-28 14:10
Tugberk Ugurlu


When you remove a Windows Azure virtual machine, you are not actually removing everything. I hear you saying "What!!!"? Let's dive in then :)

As each VM is kept under a cloud service, that cloud service is one of the things that remain after you remove your VM. The other thing is the disk (or disks, depending on your case) attached to your VM. Even if you didn't attach any data disks, at least your OS disk will remain inside your storage account after you remove your VM. However, you cannot go and directly delete that VHD file like a typical blob, because there is a lease on that blob as it's still viewed as one of your registered disks.
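To make the lease behavior concrete, here is a tiny JavaScript model of what happens (the store and its methods are hypothetical, not the Azure API): deleting the blob fails while the disk registration holds the lease, and succeeds once the lease is released.

```javascript
// Toy model of the lease behavior described above -- not the Azure API.
function createBlobStore() {
  const blobs = new Map(); // blob name -> { leased: bool }
  return {
    addDiskBlob(name) { blobs.set(name, { leased: true }); }, // registered disk
    releaseLease(name) { blobs.get(name).leased = false; },   // deregister disk
    deleteBlob(name) {
      if (blobs.get(name).leased) {
        throw new Error('There is currently a lease on the blob ' +
                        'and no lease ID was specified in the request.');
      }
      blobs.delete(name);
    },
    has(name) { return blobs.has(name); }
  };
}

const store = createBlobStore();
store.addDiskBlob('os-disk.vhd');

try {
  store.deleteBlob('os-disk.vhd'); // fails: the disk registration holds a lease
} catch (e) {
  console.log(e.message);
}

store.releaseLease('os-disk.vhd'); // like removing the disk registration
store.deleteBlob('os-disk.vhd');   // now the delete goes through
```

This is the same two-step dance the rest of the post walks through with the real cmdlets: deregister the disk first, then delete the blob.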

Let's look at one of my own cases of removing a Windows Azure VM. Yesterday, I created two VMs for demonstration purposes at a presentation, and after the presentation I removed those VMs by running the "Remove-AzureVM" PowerShell command. So far, so good. When I go to the portal, I don't see those VMs, as expected, and I don't see any cloud services either.

When you create a VM, the new portal doesn't show that you have a cloud service, but it implicitly creates one for you under the covers. As soon as you attach another VM to that cloud service, the cloud service shows up on the new portal as well. In either case, the generated cloud service is completely visible to you through the REST management API, so you can view it through the PowerShell Cmdlets.

Let's run "Get-AzureService" PowerShell command to see my cloud services:

image

These are the generated cloud services which belong to my removed VMs. As far as I know, these cloud services won't cost me anything as they contain no deployments but it would be nice to remove them as I don't need them.

image

I got rid of the unnecessary cloud services. The next and final step of this cleanup is to remove the OS disks I have. This is actually more important because the disk VHD files are stored inside Windows Azure storage as blobs, and each OS disk is approx. 127GB in size. So, I will pay for them as long as I keep them. Through the new Windows Azure Storage PowerShell Cmdlets, I can view my VHD container to see which VHD files I have:

image

As you can see, I have two unnecessary VHD files which remained after my VM removal. At this stage, you might think of directly deleting these blobs but, as mentioned before, that will fail.

image

The error message I got is actually very descriptive: There is currently a lease on the blob and no lease ID was specified in the request. Let's run another PowerShell command to see which disks I have registered:

image

I have those two blobs registered as my disks, and this prevents me from directly deleting them, which is perfectly reasonable. To prove that this is actually the case, I will view the HTTP headers of one of those disks through CloudBerry Explorer for Azure Blob Storage:

image

You can see that the "x-ms-lease-status" header is set to "locked", which states that there is a lease on the blob. Let's remove the "IstBootcampDemoTugberk-IstBootcampVM1-2013-4-27-132" disk from my registered disks:

image

Now, let's look at the HTTP headers of the blob again:

image

The lease is now released and it's possible to delete the blob:

image

However, you can also delete the disk from your registered disks list and delete it from the storage container in one go by running the "Remove-AzureDisk" PowerShell Cmdlet with the "DeleteVHD" switch:

image

The VHD file is completely gone. I hope the post was helpful :)

Windows Azure PowerShell Cmdlets In a Nutshell

Windows Azure PowerShell Cmdlets are a great tool to manage your Windows Azure services, but if you are like me, you would want to know where all the stuff goes. This post is all about that.
2013-04-26 05:05
Tugberk Ugurlu


Managing your Windows Azure services is super easy with the various management options, and my favorite among these options is the Windows Azure PowerShell Cmdlets. They are very well-documented and, if you know PowerShell well enough, really easy to grasp. In this post, I would like to give a few details about this management option and hopefully give you a head start.

Install it and get going

Installation of the Windows Azure PowerShell Cmdlets is very easy and well-documented. You can reach the download link from the "Downloads" section on the Windows Azure web site. From there, all you need to do is follow the instructions to install the Cmdlets through the Web Platform Installer.

After the installation, we can verify that we have the Windows Azure PowerShell Cmdlets installed on our system by running the "Get-Module -ListAvailable" command:

image

To get started using the Cmdlets, you can read the "Get Started with Windows Azure Cmdlets" article, which explains how to set up the trust between your machine and your Windows Azure account. However, I will cover some of the steps here as well.

The first thing you need to do is download your publish settings file. There is a handy cmdlet to do this for you: Get-AzurePublishSettingsFile. Running this command will go to your account, create a management certificate for your account and download the necessary publish settings file.

The next step is to import this publish settings file through another handy cmdlet: Import-AzurePublishSettingsFile <publishsettings-file-path>. This command actually sets up lots of stuff on your machine:

  • Under "%appdata%\Windows Azure Powershell", it creates the necessary configuration files for the cmdlets to get the authentication information.

image

  • These configuration files don't actually contain the certificate information on their own; they just hold the thumbprint of your management certificate and your subscription id.
  • The actual certificate is imported into your certificate store. You can view the installed certificate by running the "dir Cert:\CurrentUser\My" command.

image

Now you are ready to go. Run the "Get-AzureSubscription" command to see your subscription details, and you will see that it's set as your default subscription. So, from now on, you don't need to do anything with your subscription; you can just run the commands without worrying about your credentials (of course, this may be a good or a bad thing, depending on your situation). For example, I ran the Get-AzureVM command to view my VMs:

image

So, where is my stuff?

We installed the stuff and we just saw that it's working. So, where did all the stuff go and how does this thing even work? Well, if you know PowerShell, you also know that modules are stored under specific folders. You can view these folders by running the '$env:PSModulePath.split(';')' command:

image

Notice that there is a path for Windows Azure PowerShell Cmdlets, too. Without knowing this stuff, we could also view the module definition and get to its location from there:

Get-Module -ListAvailable -Name Azure

image

"C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure" directory is where you will find the real meat:

image

On the other hand, when we imported the publish settings, it put a few things about my subscription under "%appdata%\Windows Azure Powershell". The certificate was also installed into my certificate store, as mentioned before.

Clean Up

When you start managing your Windows Azure services through the PowerShell Cmdlets, you end up with your Windows Azure account information and management certificate information in various places on your computer. Even if you uninstall the Windows Azure PowerShell Cmdlets from your machine, you are not cleaning up everything. Let's start by uninstalling the Cmdlets from your computer.

Simply go to Control Panel > Programs > Programs and Features, find the installed program named "Windows Azure PowerShell" and uninstall it.

image

The next step is to go to the "%appdata%\Windows Azure Powershell" directory and delete the folder completely. One more step to go now: deleting your certificate. First, find out what the thumbprint of your certificate is:

image

Then, run the Remove-Item command to remove the certificate:

Remove-Item Cert:\CurrentUser\My\507DAAF6F285C4A72A45909ACCEE552B4E2AE916 –DeleteKey

You are all done uninstalling the Windows Azure PowerShell Cmdlets. Remember, Windows Azure is powerful, but it's more powerful when you manage it through PowerShell :)


SignalR and Real-time Web Application Scenarios Webcast Recording (In Turkish) is Available

A few days ago, I presented a webcast (in Turkish) about ASP.NET SignalR and real-time web application scenarios, and its recording is now available.
2013-04-22 02:52
Tugberk Ugurlu


A few days ago, I presented a webcast (in Turkish) about ASP.NET SignalR and real-time web application scenarios, and it went pretty great, I think. I also managed to record the webcast successfully and put it on Vimeo.

SignalR ve Realtime Web Uygulama Senaryoları from Tugberk Ugurlu on Vimeo.

Hope you'll like it :)