Sorted By: Tag (azure)

Pulling an Old Article From the Coffin: SignalR with Redis Running on a Windows Azure Virtual Machine

A long time ago (about 5 years ago, at least), I contributed an article to the SignalR wiki about scaling SignalR with Redis. You can still find the article here. I also blogged about it here. However, over time, the pictures got lost there. I got a few requests from my readers to refresh those images, and I was lucky enough to find them :) I decided to publish that article here so that I would have much better control over the content.
2018-08-08 14:32
Tugberk Ugurlu


A long time ago (about 5 years ago, at least), I contributed an article to the SignalR wiki about scaling a SignalR application with Redis. You can still find the article here. I also blogged about it here. However, over time, the pictures got lost there. I got a few requests from my readers to refresh those images, and I was lucky enough to find them :) I decided to publish that article here so that I would have much better control over the content. So, here is the post :)

Please keep in mind that this is a really old post and lots of things have evolved since then. However, I do believe the concepts still resonate, and it's valuable to show how to achieve this within a cloud provider's context.


SignalR with Redis Running on a Windows Azure Virtual Machine

This wiki article will walk you through running your SignalR application on multiple machines with Redis as your backplane, using Windows Azure Virtual Machines for scale-out scenarios.

Creating the Windows Azure Virtual Machines

First of all, we will spin up our virtual machines. What we want here is to have two Windows Server 2008 R2 virtual machines for our SignalR application, and we will name them Web1-08R2 and Web2-08R2. We will have IIS installed on both of these servers and, at the end, we will load balance the requests on port 80.

Our third virtual machine will be another Windows Server 2008 R2 machine, dedicated to our Redis server. We will call this server Redis-08R2.

To spin up the VMs, go to the new Windows Azure Management Portal and hit the New icon at the bottom-right corner.

[Screenshot: the New icon in the Windows Azure Management Portal]

Creating a virtual machine running Windows Server 2008 R2 is explained here in detail. We followed the same steps to create our first VM, named Web1-08R2.

Creating the second VM follows a slightly different approach than the first one. Under the hood, every virtual machine is a cloud service instance, and we want to put our second VM (Web2-08R2) under the same cloud service that our first web VM is running under. To do that, we need to follow the same steps as explained inside the previously mentioned article, but when we come to the 3rd step of the creation wizard, we should choose the Connect to existing Virtual Machine option this time and select the first VM we have just created.

[Screenshot: the Connect to existing Virtual Machine option in the creation wizard]

As the last step, we now need to create our Redis VM, which will be named Redis-08R2. We will follow the same steps as we did when creating our second web VM (Web2-08R2).

Setting Up Redis as a Windows Service

To use Redis on a Windows machine, we went to the Redis on Windows prototype GitHub page, cloned the repository and followed the steps explained under the How to build Redis using Visual Studio section.

After you build the project, you will have all the files you need under the msvs\bin\release path as zip files. The redisbin.zip file contains the redis server, the redis command line interface and some other tools. The rediswatcherbin.zip file contains the MSI file to install redis as a Windows service. You can just copy those zip files to your Redis VM and extract redisbin.zip under c:\redis\bin. Then follow these steps:

  • Currently, there is a bug in the RedisWatcher installer: if you don't have the Microsoft Visual C++ 2010 Redistributable Package installed on your machine, the service won't start. So, I installed it first.

  • Copy this redis.conf file and put it under the c:\redis\bin directory. Open it up and add a password by adding the following line:

    requirepass 1234567

    Take this note into consideration when you are setting up your redis password:

    Warning: since Redis is pretty fast, an outside user can try up to 150k passwords per second against a good box. This means that you should use a very strong password, otherwise it will be very easy to break.

  • Then, extract the rediswatcherbin.zip somewhere and run the InstallWatcher.msi to install the service.

  • Navigate to the C:\Program Files (x86)\RedisWatcher directory. You will see a file named watcher.conf inside this directory. Open this file and replace its entire contents with the following text. The only difference here is that we are supplying the redis.conf file path for the server to use:

    exepath c:\redis\bin
    exename redis-server.exe
    
    {
     workingdir c:\redis\inst1
     runmode hidden
     saveout 1
     cmdparms c:\redis\bin\redis.conf
    }
    
  • Create a folder named inst1 under c:\redis because we have specified this folder as the working directory for our redis instance.

  • When you search the Windows services in PowerShell, you will see that the RedisWatcherSvc service is installed.

[Screenshot: the RedisWatcherSvc service listed in PowerShell]

  • Run the following PowerShell command to start the service for the first time:

    (Get-Service -Name RedisWatcherSvc).Start()
    

Now we have a Redis server running on our VM. To test whether it is actually running, open up a Windows command prompt under c:\redis\bin and run the following command (assuming you set your password to 1234567):

redis-cli -h localhost -p 6379 -a 1234567

Now, you have a redis client running.

[Screenshot: redis-cli connected to the server]

Ping Redis to see if you are really authenticated:

[Screenshot: PING reply from the Redis server]
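An authenticated client simply gets a PONG back; the exchange looks roughly like this (illustrative output, as the exact prompt text varies between redis-cli builds):

redis localhost:6379> ping
PONG

If the password is wrong or missing, the server replies with an authentication error instead.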

Now we are nearly set. As the last step on our Redis server, we need to open up TCP port 6379 for external communication. You can do this under the Windows Firewall with Advanced Security window, as explained here.

[Screenshot: opening TCP port 6379 in Windows Firewall with Advanced Security]
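If you prefer to script this instead of clicking through the firewall UI, a rule along the following lines from an elevated command prompt should achieve the same thing (a sketch, not part of the original walkthrough):

netsh advfirewall firewall add rule name="Redis TCP 6379" dir=in action=allow protocol=TCP localport=6379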

Communicating Through Internal Endpoints Between Windows Azure Virtual Machines Under the Same Cloud Service

When you are inside one of your web VMs, you can simply look up the redis VM by hostname.

[Screenshot: resolving the Redis VM by hostname]

The hostname will resolve to the DIP (Dynamic IP Address), which Windows Azure uses internally. We could easily configure public endpoints through the Windows Azure Management Portal, but in that case we would be opening redis up to the whole world. Also, if we communicated with our redis server through the VIP (Virtual IP Address), we would always go through the load balancer, which has its own additional cost.

So, we can easily connect to our redis server from any other connected VM by hostname.
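For example, from one of the web VMs, the same redis-cli check from earlier should work against the hostname instead of localhost (assuming the firewall rule above is in place):

redis-cli -h Redis-08R2 -p 6379 -a 1234567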

The SignalR Application with Redis

Our SignalR application will not be that much different from a normal SignalR application, thanks to the SignalR.Redis project. All you need to do is add the SignalR.Redis NuGet package to your application and configure SignalR to use Redis as the message bus inside the Application_Start method in the Global.asax.cs file:

protected void Application_Start(object sender, EventArgs e)
{
    // Hook up redis
    string server = ConfigurationManager.AppSettings["redis.server"];
    string port = ConfigurationManager.AppSettings["redis.port"];
    string password = ConfigurationManager.AppSettings["redis.password"];

    GlobalHost.DependencyResolver.UseRedis(server, Int32.Parse(port), password, "SignalR.Redis.Sample");
}

For our demo, the appSettings section should look like this:

<appSettings>
    <add key="redis.server" value="Redis-08R2" />
    <add key="redis.port" value="6379" />
    <add key="redis.password" value="1234567" />
</appSettings>

I put the application under IIS on both web servers (Web1-08R2 and Web2-08R2) and configured them to run under a .NET Framework 4.0 integrated application pool.

For this demo, I am using the Redis.Sample chat application included inside the SignalR.Redis project.
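For context, the hub in that chat sample is tiny. It looks roughly like the following (a from-memory sketch of the SignalR hub API of that era, not the exact sample source):

using SignalR.Hubs;

public class Chat : Hub
{
    public void Send(string message)
    {
        // Broadcast the message to every connected client. With Redis
        // configured as the message bus, this reaches clients connected
        // to either web server.
        Clients.addMessage(message);
    }
}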

Let's test them quickly before going public. I fired up both web applications on the servers and here is the result:

[Screenshot: the chat application running on both web servers]

Perfectly running! Let's open them up to the world.

Opening up Port 80 and Load Balancing the Requests

Our requirement here is to make our application reachable over HTTP and, at the same time, to load balance the requests between our two web servers.

To do that, we need to go to the Windows Azure Management Portal and set up TCP endpoints for port 80.

First, we navigate to the dashboard of our Web1-08R2 VM and hit Endpoints from the dashboard menu:

[Screenshot: the Endpoints tab on the VM dashboard]

From there, hit the Add Endpoint icon at the bottom of the page:

[Screenshot: the Add Endpoint icon]

A wizard is going to appear on the screen:

[Screenshot: the Add Endpoint wizard]

Click the right-arrow icon and go to the next step, which is the last one, where we enter the port details:

[Screenshot: entering the port details]

After that, our endpoint will be created:

[Screenshot: the created endpoint]

Follow the same steps for the Web2-08R2 VM as well and open the Add Endpoint wizard. This time, we will be able to select Load-balance traffic on an existing port. Choose the previously created port and continue:

[Screenshot: the Load-balance traffic on an existing port option]

At the last step, enter the proper details and hit save:

[Screenshot: endpoint details for the second VM]

We will see our new endpoint being created, but this time the Load Balanced column indicates Yes.

[Screenshot: the endpoint list showing Load Balanced = Yes]
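As a side note, the same load-balanced endpoint could also be scripted with the Azure PowerShell service management cmdlets of that era; something along these lines (a sketch under that assumption, with a placeholder cloud service name, rather than a step from the original article):

Get-AzureVM -ServiceName "YourCloudService" -Name "Web2-08R2" |
    Add-AzureEndpoint -Name "Web" -Protocol tcp -LocalPort 80 -PublicPort 80 -LBSetName "web-lb" -ProbeProtocol tcp -ProbePort 80 |
    Update-AzureVM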

As we configured our web applications without a host name and they are exposed through port 80, we can directly reach our application through the URL or the Public Virtual IP Address (VIP) which is provided to us. When we run our application, we should see it running as below:

[Screenshot: the chat application reached through the public VIP]

No matter which server the request goes to, the message will be broadcast to every client because we are using Redis as a message bus.


Microsoft Build 2016 in a Nutshell

Two weeks ago, I had an amazing opportunity to be at the Microsoft Build Conference in San Francisco, and I would like to share my experience of the conference with you in this post by highlighting what happened and giving you my personal takeaways.
2016-04-09 18:59
Tugberk Ugurlu


Two weeks ago, I had an amazing opportunity to be at the Microsoft Build Conference in San Francisco as an attendee, thanks to my amazing company Redgate. The experience was truly unique, and the number of people I met there was huge. A bit late, but I would like to share my experience of the conference with you in this post by highlighting what happened and giving you my personal takeaways. You can also check out my tweets from the Build conference.


Announcements

There were a bunch of big and small announcements from Microsoft throughout the conference. Some of these were highlighted during the two keynotes, and some other announcements were spread across the three days. I tried to capture all of them here, but it's very likely I missed some (mostly the small ones).


Sessions


Here is the list of sessions I have attended:

As much as I wanted to attend some other sessions, I missed a few, mostly due to clashes with other sessions. Luckily, recordings of all Build 2016 sessions are available up on Channel 9. Here is my list of sessions to catch up on:

There were also many good Channel 9 Live interviews. You can find them here. Here is a personal list of a few which are worth listening to:


Personal Takeaways

All in all, it has been a great conference and, as stated, I am still catching up on the stuff that I missed. Throughout the conference, I picked up a few key points, and I want to end the post with those:

  • I have seen more from Microsoft on making developers' lives easier and more productive by enabling new tools (Bash on Ubuntu on Windows), supporting multiple platforms (Service Fabric running on every environment including AWS, on-premises and Azure Stack, and a preview of Service Fabric on Linux), open sourcing more (some parts of Xamarin have gone open source) and making existing paid tools available for free (Xamarin is now free).
  • Microsoft is more focused on bringing their existing services together and trying to provide a cohesive ecosystem for developers. Service Fabric, Cognitive Services and Data Lake are a few examples of this.
  • .NET Core and CoreCLR are approaching finalization for v1. After RC2, I don't suppose there will be many more features added or concepts changed.
  • I think this is the first time I have seen stabilization in Microsoft's client apps story. Universal Windows Platform (UWP) was the focus in this area this year, and it was the same the previous year.
  • I am absolutely happy to see Microsoft abandoning Windows Phone day by day. There were no direct sessions on it during the conference.
  • There were more steps towards making software manage people's lives in a better way. The Skype Bot Framework was one of these steps.
  • Microsoft (mostly the Azure group) is investing heavily in IoT solutions. Azure Functions and the new updates to the Azure IoT Suite are just a few signs of this.
  • Azure Resource Manager (ARM) and ARM templates are getting a lot of love from Microsoft, and this is the way they are pushing forward. They are even building new services on Azure on top of this.

How Azure Web Apps Hosts an ASP.NET 5 Application

An ASP.NET 5 application has a totally different directory structure when you publish it, and it wasn't clear to me how Azure Web Apps is actually able to host an ASP.NET 5 application. If you are confused about this as well, the answer is here.
2015-04-12 10:13
Tugberk Ugurlu


I want to write this quick post because figuring out how an ASP.NET 5 application is hosted under Azure Web Apps was a big question for me. Some information is already out there on this topic, but the concept wasn't crystal clear to me because, when you look at the packed version of an ASP.NET 5 web application, it has the following structure on disk:

[Screenshot: directory structure of a packed ASP.NET 5 application]

It gets even more interesting when you look inside the wwwroot folder:

[Screenshot: contents of the wwwroot folder]

We have the static files, a bin folder which only contains AspNet.Loader.dll, and a web.config file. The most interesting bit here is the information inside the web.config file, and this information will be read by Helios:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <add key="bootstrapper-version" value="1.0.0-beta4-11526" />
    <add key="runtime-path" value="..\approot\packages" />
    <add key="dnx-version" value="" />
    <add key="dnx-clr" value="" />
    <add key="dnx-app-base" value="..\approot\src\ConfyConf.Client.Web" />
  </appSettings>
</configuration>

The web.config file gives us enough evidence that wwwroot is the directory we need to point IIS to; Helios will then read these application settings to figure out where the application actually is, where the dependencies and packages are, etc. Let's deploy the application using the Visual Studio Publish feature. I created a brand new Azure Web App on the fly and hit publish:

[Screenshot: the Visual Studio publish dialog]

When the deployment is completed, the web site is immediately up:

[Screenshot: the deployed web site up and running]

Let’s look at how the directory structure look like after the deployment:

[Screenshot: the directory structure after deployment, with approot and wwwroot folders]

Two interesting bits here are the approot and wwwroot folders. The question here is how the Azure Web App knew to look into the wwwroot folder. It was actually dead simple, but it wasn't obvious at first glance. Before showing the answer, let's have a look at what IIS Express does to host an ASP.NET 5 application, which will give us a hint about the answer.

I fired up the application through Visual Studio to get IIS Express to host my application. After the application was up, I dug into Task Manager to get the command line arguments for IIS Express:

"C:\Program Files (x86)\IIS Express\iisexpress.exe" /config:"C:\Users\Tugberk\Documents\IISExpress\config\applicationhost.config" /site:"WebApplication10" /apppool:"Clr4IntegratedAppPool"

This points us to the applicationhost.config file and the WebApplication10 site inside it. When you look at the site node for WebApplication10, you will see that some of the magic is actually happening there:

<site name="WebApplication10" id="77">
    <application path="/" applicationPool="Clr4IntegratedAppPool">
        <virtualDirectory path="/" physicalPath="D:\apps\WebApplication10\src\WebApplication10\wwwroot" />
    </application>
    <bindings>
        <binding protocol="http" bindingInformation="*:47112:localhost" />
    </bindings>
</site>

wwwroot is pointed to as a virtual directory for the web application here, for the root path. So, does this information help us see how the Azure Web App is hosting our app? Absolutely! It tells us that something similar should be configured on the Azure Web Apps side so that it sees the wwwroot folder as the root. If you navigate to the Configure section of your Azure Web App and scroll down to the bottom, you will see the virtual directory configuration there.

[Screenshot: the virtual directory configuration in the Azure portal]

Clever! My guess is that this configuration was put there when I published the web application through Web Deploy inside Visual Studio. Digging into the Web Publish Activity output could give us more information about when exactly this configuration is set.

Using Azure Storage Emulator Command-Line Tool: WAStorageEmulator.exe

Starting from version 3.0 of the emulator, a few things have changed and lots of people are not aware of this. When you launch the Storage Emulator now, you will see a command prompt pop up. I wanted to write this short blog post just to give you a head start.
2014-09-15 09:28
Tugberk Ugurlu


When you download the Azure SDK for Visual Studio, it brings down a bunch of stuff for you, such as the Azure Storage and Compute Emulators. With a worker or web role project in Visual Studio, we can get both emulators up and running by simply firing up the project. However, if we are not working with a web or worker role, we need a way to fire up the storage emulator ourselves, and it is actually pretty easy. Starting from version 3.0 of the emulator, a few things have changed and lots of people are not aware of this. I wanted to write this short blog post just to give you a head start.

When you launch the Storage Emulator now, you will see a command prompt pop up.

[Screenshot: the Storage Emulator command prompt]

This is WAStorageEmulator.exe, the storage emulator command line tool, which allows you to perform a bunch of operations such as starting/stopping the emulator and querying its status. You can either run this command prompt as I did above, or navigate to the C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\ directory and find WAStorageEmulator.exe there. You can read up on the Storage Emulator Command-Line Tool Reference on MSDN to find out what commands are available. What I would like to point out is the fact that you can now run the emulator in-process through the command prompt, which is quite nice:

[Screenshot: the emulator running in-process]
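For reference, the common invocations look along these lines (from memory; check the MSDN command-line reference linked above for the authoritative list):

WAStorageEmulator.exe start
WAStorageEmulator.exe status
WAStorageEmulator.exe stop
WAStorageEmulator.exe start -inprocess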

The other thing is that you can now quite easily get the storage emulator up and running in your integration tests. You can even reset the whole storage account at the beginning of your tests, start the emulator, and stop it at the end. Check out the Using the Azure Storage Emulator for Development and Testing section on MSDN for further details.
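To sketch that idea (this is my own illustration rather than the MSDN sample; the install path is assumed from the default SDK location mentioned above):

using System.Diagnostics;

public static class StorageEmulator
{
    private const string ExePath =
        @"C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\WAStorageEmulator.exe";

    // Runs a single emulator command (e.g. "clear all", "start", "stop")
    // and waits for it to exit.
    public static void Run(string arguments)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = ExePath,
            Arguments = arguments,
            UseShellExecute = false
        };

        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }
    }
}

// Inside a test fixture, you could then reset and start the emulator before
// the tests run and stop it afterwards:
//
//     StorageEmulator.Run("clear all");
//     StorageEmulator.Run("start");
//     ...
//     StorageEmulator.Run("stop");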

A Gentle Introduction to Azure Search

The Microsoft Azure team released Azure Search, a hosted search service solution, as a preview product a few days ago. Azure Search is a suitable product if you are dealing with a high volume of data (millions of records) and want efficient, complex and clever search over that data. In this post, I will try to lay out some fundamentals about this service with a very high-level introduction.
2014-09-10 12:02
Tugberk Ugurlu


With many of the applications we build as software developers, we need our data to be exposed, and we want that data to be within easy reach so that the users of the application can easily find what they are looking for. This task is especially tricky if you have a high amount of data (millions, even billions of records) in your system. At that point, the application needs to give the user a great and flawless experience so that they can filter down the results based on what they are actually looking for. Don't we have solutions to address these problems? Of course we do, and solutions such as Elasticsearch and Apache Solr are top-notch problem solvers for this matter. However, hosting these products in your own environment and making them scalable is a whole other job.

To address these problems, the Microsoft Azure team released Azure Search, a hosted search service solution, as a preview product a few days ago. Azure Search is a suitable product if you are dealing with a high volume of data (millions of records) and want efficient, complex and clever search over that data. If you have worked with a search engine product (such as Elasticsearch, Apache Solr, etc.) before, you will be very comfortable with Azure Search, as it has so many similar features. In fact, Azure Search sits on top of Elasticsearch to provide its full-text search functionality. However, you shouldn't see this brand-new product as a hosted Elasticsearch service on Azure, because it has its own, completely different public interface.

In this post, I will try to lay out some fundamentals about this service with a very high level introduction. I’m hoping that it’s also going to be a starting point for me on Azure Search blog posts :)

Getting to Know Azure Search Service

When I look at the Azure Search service, I see four pieces which give us the whole experience:

  • Search Service
  • Search Unit
  • Index
  • Document

The search service is the highest level of the hierarchy, and it contains the provisioned search unit(s). A few concepts, such as authentication and scaling, also target the search service level.

Search units allow for scaling of QPS (Queries per Second), document count and document size. This also means that search units are the key concept for high availability and throughput. As a side note, high availability requires at least 3 replicas during the preview.

An index is the holder of a collection of documents, based on a defined schema which specifies the capabilities of the index (we will touch on this schema later). A search service can contain multiple indexes.

Lastly, a document is the actual holder of the data, based on the schema of the index the document itself lives in. A document has a key, and this key needs to be unique within the index. A document also has fields to represent the data. The fields of a document carry attributes, and those attributes define the capabilities of the field, such as whether it can be used to filter the results, etc. Also note that the number of documents an index can contain is limited based on the search units the service has.

Windows Azure Portal Experience

Let's first have a look at the portal experience and how we can get a search service ready for our use. Azure Search is not available through the current Microsoft Azure portal; it's only available through the preview portal. Inside the new portal, click the big plus sign at the bottom left and then click "Everything".

[Screenshot: the preview portal gallery]

This will get you to the "Gallery". From there, click "Data, storage, cache + backup" and then click "Search" in the new section.

[Screenshot: the Search service in the gallery]

You will get a nice intro to the Microsoft Azure Search service within the new window. Hit "Create" there.

Keep in mind that the service name must only contain lowercase letters, digits or dashes, cannot use a dash as either of the first two or the last character, cannot contain consecutive dashes, and is limited to between 2 and 15 characters in length. Other naming conventions for the service are laid out here under the Naming Conventions section.

When you come to selecting the Pricing Tier, it's time to make a decision about your usage scenario.

[Screenshot: selecting the pricing tier]

Now, there are two options: Standard and Free. The Free one should be considered the sandbox experience because it's too limiting in terms of both performance and storage space. You shouldn't try to evaluate the Azure Search service with the free tier. It is, however, great for evaluating the HTTP API. You can create a free service and use it to run your HTTP requests against.

The standard tier is the one you would choose for production use. It can be scaled both in terms of QPS (Queries per Second) and document size, through shards and replicas. Head to the "Configure Search in the Azure Preview portal" article for more in-depth information about scaling.

When you are done setting up your service, you can get the admin key or the query key from the portal and start hitting the Azure Search HTTP (or REST, if you want to call it that) API.
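As a first smoke test with the admin key, listing the indexes on a brand new service should simply return an empty list. The request below follows the same shape as the ones later in this post, so treat it as a sketch against the preview API version used throughout:

GET https://{search-service-name}.search.windows.net/indexes?api-version=2014-07-31-Preview HTTP/1.1
api-key: {your-api-key}
Host: {search-service-name}.search.windows.net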

Azure Search HTTP API

The Azure Search service is managed through its HTTP API, and it's not hard to guess that even the Azure Portal uses this API to manage the service. It's a lightweight API which understands JSON as the content type. When we look at it, we can divide this HTTP API into three parts:

The Index Management part of the API allows us to manage the indexes with various operations, such as creating, deleting and listing the indexes. It also allows us to see some index statistics. Creating the index is probably going to be the first operation you will perform, and it has the following structure:

POST https://{search-service-name}.search.windows.net/indexes?api-version=2014-07-31-Preview HTTP/1.1
User-Agent: Fiddler
api-key: {your-api-key}
Content-Type: application/json
Host:{search-service-name}.search.windows.net

{
	"name": "employees",
	"fields": [{
		"name": "employeeId",
		"type": "Edm.String",
		"key": true,
		"searchable": false
	},
	{
		"name": "firstName",
		"type": "Edm.String"
	},
	{
		"name": "lastName",
		"type": "Edm.String"
	},
	{
		"name": "age",
		"type": "Edm.Int32"
	},
	{
		"name": "about",
		"type": "Edm.String",
		"filterable": false,
		"facetable": false
	},
	{
		"name": "interests",
		"type": "Collection(Edm.String)"
	}]
}

With the above request, you can also spot a few things which apply to every API call we make. There is a header we are sending with the request: api-key. This is where you are supposed to put your API key. Also, we are passing the API version through a query string parameter called api-version. Have a look at the Azure Search REST API documentation on MSDN for further detailed information.

With this request, we are specifying the schema of the index. Keep in mind that schema updates are limited at the time of this writing. Although existing fields cannot be changed or deleted, new fields can be added at any time. When a new field is added, all existing documents in the index will automatically have a null value for that field. No additional storage space will be consumed until new documents are added to the index. Have a look at the Update Index API documentation for further information on index schema update.
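Such an additive update goes through the Update Index endpoint mentioned above; roughly like this (a sketch: the existing field list is resent with one hypothetical new field, department, appended at the end; see the linked documentation for the authoritative shape):

PUT https://{search-service-name}.search.windows.net/indexes/employees?api-version=2014-07-31-Preview HTTP/1.1
api-key: {your-api-key}
Content-Type: application/json
Host: {search-service-name}.search.windows.net

{
	"name": "employees",
	"fields": [
		{ "name": "employeeId", "type": "Edm.String", "key": true, "searchable": false },
		{ "name": "firstName", "type": "Edm.String" },
		{ "name": "lastName", "type": "Edm.String" },
		{ "name": "age", "type": "Edm.Int32" },
		{ "name": "about", "type": "Edm.String", "filterable": false, "facetable": false },
		{ "name": "interests", "type": "Collection(Edm.String)" },
		{ "name": "department", "type": "Edm.String" }
	]
}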

After you have your index schema defined, you can start populating your index with the Index Population API. The Index Population API is a little bit different and I honestly don't like it (I have a feeling that Darrel Miller won't like it either :)). The reason why I don't like it is the way we define the operation. With this HTTP API, we can add a new document, and update or remove an existing one. However, we are defining the type of the operation inside the request body, which is so weird if you ask me. The other weird thing about this API is that you can send multiple operations in one HTTP request by putting them inside a JSON array. The important fact here is that those operations don't run in a transaction, which means that some of them may succeed and some of them may fail. So, how do we know which ones actually failed? The response will contain a JSON array indicating each operation's status. Nothing wrong with that, but why do we reinvent the world? :) I would be happier sending a batch request using the multipart content type. Anyway, enough bitching about the API :) Here is a sample request to add a new document to the index:

POST https://{search-service-name}.search.windows.net/indexes/employees/docs/index?api-version=2014-07-31-Preview HTTP/1.1
User-Agent: Fiddler
api-key: {your-api-key}
Content-Type: application/json
Host: {search-service-name}.search.windows.net

{
	"value": [{
		"@search.action": "upload",
		"employeeId": "1",
		"firstName": "Jane",
		"lastName": "Smith",
		"age": 32,
		"about": "I like to collect rock albums",
		"interests": ["music"]
	}]
}

As said, you can send the operations in a batch:

POST https://{search-service-name}.search.windows.net/indexes/employees/docs/index?api-version=2014-07-31-Preview HTTP/1.1
User-Agent: Fiddler
api-key: {your-api-key}
Content-Type: application/json
Host: {search-service-name}.search.windows.net

{
	"value": [{
		"@search.action": "upload",
		"employeeId": "2",
		"firstName": "Douglas",
		"lastName": "Fir",
		"age": 35,
		"about": "I like to build cabinets",
		"interests": ["forestry"]
	},
	{
		"@search.action": "upload",
		"employeeId": "3",
		"firstName": "John",
		"lastName": "Fir",
		"age": 25,
		"about": "I love to go rock climbing",
		"interests": ["sports", "music"]
	}]
}

Check out the great documentation about the index population API to learn more about it.

Lastly, there are the query and lookup APIs, where you can use the OData 4.0 expression syntax to define your query. Go and check out their documentation as well.
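To give a flavour of the query side (again a sketch against the employees index above, not an exhaustive example; the full parameter list is in the documentation), a full-text search combined with an OData filter is a simple GET over the documents collection:

GET https://{search-service-name}.search.windows.net/indexes/employees/docs?search=rock&$filter=age%20lt%2030&api-version=2014-07-31-Preview HTTP/1.1
api-key: {your-api-key}
Host: {search-service-name}.search.windows.net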

Even though the service is so new, there are already great things happening around it. Sandrino Di Mattia has two cool open source projects for Azure Search: one is the RedDog.Search .NET client and the other is the RedDog Search Portal, a web-based UI tool to manage your Azure Search service. Another one is from Richard Astbury: an Azure Search node.js / JavaScript client. I strongly encourage you to check them out. There are also two great video presentations about Azure Search by Liam Cavanagh, a Senior Program Manager in the Azure Data Platform Incubation team at Microsoft.

Stop what you are doing and go watch them if you care about Azure Search. They will give you a nice overview of the product, and those videos could be your starting point.

You can also view my talk on Azure Search at AzureConf 2014.
