Getting Started with ASP.NET vNext by Setting Up the Environment From Scratch

In this post, I'll walk you through how you can set up your environment from scratch to get going with ASP.NET vNext.
2014-09-28 19:08
Tugberk Ugurlu


I'm guessing that you have already heard the news about ASP.NET vNext. It was announced publicly a few months back at TechEd North America 2014 and it's being rewritten from the ground up, which means: "Say goodbye to our dear System.Web.dll" :) No kidding, I'm pretty serious :) It brings lots of improvements which will take application and development performance to the next level for us .NET developers. ASP.NET vNext is coming along fast and there is already a good amount of resources for you to dig into. If you haven't done so yet, navigate to the bottom of this post and go through the links under Resources. However, I strongly suggest you check out Daniel Roth’s talk on ASP.NET vNext at TechEd New Zealand 2014 first, which is probably the best introduction talk on ASP.NET vNext.

What I would like to go through in this post is how you can set up your environment from scratch to get going with ASP.NET vNext. I also think this post is useful for understanding the key concepts behind this new development environment.

First of all, you need to install kvm, which stands for K Version Manager. You can install kvm by running the command below in a command prompt on Windows.

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/master/kvminstall.ps1'))"

This will invoke an elevated PowerShell command prompt and install a few things on your machine. Actually, everything this command installs ends up under the %userprofile%\.kre directory.


Now, install the latest available KRuntime environment. You can do this by running the kvm upgrade command.

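In its simplest form, that is a single command (the version pulled down will obviously differ over time):

kvm upgrade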

The latest version is installed from the default feed (which is https://www.myget.org/F/aspnetmaster/api/v2 at the time of this writing). We can verify that the K environment is really installed by running kvm list, which lists the installed K environments along with their associated information.


Here, we only have the good old desktop CLR. If we want to work against CoreCLR (a.k.a. K10), we should install it using the -svrc50 switch.

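I don't remember every alpha drop's exact syntax, so treat the following as a sketch rather than the definitive command; the idea is to pass the -svrc50 switch so the Core CLR flavor of the runtime gets installed:

kvm install latest -svrc50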

You can switch between versions using the "kvm use" command. You can also pass the -p switch to persist your choice of runtime, which makes it your default and keeps it across processes.

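As a rough example (the version string here is just what kvm list reported on my machine, so substitute your own):

kvm use 1.0.0-alpha3 -p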

Our system is now ready for ASP.NET vNext development. I have a tiny working AngularJS application that you can find here in my GitHub repository. It was a pure HTML + CSS + JavaScript web application which required no backend. However, at some point I needed some backend functionality, so I integrated it with ASP.NET vNext. Here is how I did it:

First, we need to specify the NuGet feeds that we will be using for our application. Go ahead and place the following content into a NuGet.config file at the root of your solution:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
      <clear />
      <add key="AspNetVNext" value="https://www.myget.org/F/aspnetrelease/api/v2" />
      <add key="NuGet.org" value="https://nuget.org/api/v2/" />
  </packageSources>
</configuration>

Next, add a project.json file inside your application directory. This file holds your dependencies and commands. It can contain more, but that is all we need for this post:

{
    "dependencies": {
        "Kestrel": "1.0.0-alpha3",
        "Microsoft.AspNet.Diagnostics": "1.0.0-alpha3",
        "Microsoft.AspNet.Hosting": "1.0.0-alpha3",
        "Microsoft.AspNet.Mvc": "6.0.0-alpha3",
        "Microsoft.AspNet.Server.WebListener": "1.0.0-alpha3"
    },
    
    "commands": { 
        "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://localhost:5001",
        "kestrel": "Microsoft.AspNet.Hosting --server Kestrel --server.urls http://localhost:5004"
    },
    "frameworks": {
        "net45": {},
        "k10": {}
    }
}

The runtime installation brings a few things with it, and one of them is the kpm tool, which allows you to manage your packages. You can think of it as NuGet (indeed, it uses NuGet behind the scenes), but it knows how to read your project.json file and install packages accordingly. If you run kpm on its own, you can see the options it gives you.

Now that your project.json is ready, you can run kpm restore.

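There is nothing more to it than running it from the directory that contains your project.json file:

kpm restore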

Note that the restore output looks a little different if you are using an alpha4 release.


Based on the commands defined in your project.json file, you can now run one of them to fire up your application.

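With the alpha bits I was using, you run a project.json command through the k command, so for the file above either of these should do (the ports come from the command definitions):

k web
k kestrel

The first one hosts the application on WebListener at http://localhost:5001, the second one on Kestrel at http://localhost:5004.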

Also, you can run "set KRE_TRACE=1" before running your command to see diagnostic details about the process if you need to.

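For example, to get the trace output while firing up the Kestrel command from above:

set KRE_TRACE=1
k kestrel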

My little app is now up and running.


Resources

Quickly Hosting Static Files In Your Development Environment with Node http-server

Yesterday, I was looking for something to have a really quick test space on my machine to play with AngularJS and I found http-server: a simple zero-configuration command-line http server.
2014-09-28 13:19
Tugberk Ugurlu


Yesterday, I was looking into AngularJS, which is long overdue for me. I wanted to have a really quick test space on my machine to play with AngularJS. I could just go with Plunker or JSFiddle but I wasn’t in the mood for an online editor.

So, I first installed Bower and then I installed a few libraries to get me going. I made sure to save them inside my bower.json file, too:


bower install angular -S
bower install bootstrap -S
bower install underscore -S


Then, I installed a few node modules to help me automate some tasks with gulp.

BTW, did you know VS has support for running your gulp and grunt tasks? Check out Scott Hanselman’s post if you don’t: Introducing Gulp, Grunt, Bower, and npm support for Visual Studio

Finally, I wrote a few lines of code to get me going. The state of this tiny app can be found under my GitHub account. I was at the point where I wanted to see it working in my browser.


As a developer who has been working heavily with .NET for a while, I wanted to hit F5 :) Jokes aside, all I wanted here was something simple that would host the static files inside my directory with no configuration required. While searching for a good option, I found http-server: a simple zero-configuration command-line HTTP server.


I simply installed it as a global npm module.

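Assuming you already have Node.js and npm on your machine, that is a one-liner:

npm install -g http-server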

All done! It’s now on my path and I can simply cd into the root directory of my web site and run http-server there.

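Roughly like this (by default it serves on port 8080; the folder name below is just a placeholder for wherever your static files live):

cd my-angular-app
http-server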

Super simple! It also watches for changes, so you can modify your files while http-server serves them. You can even combine this with LiveReload and a gulp task.

Elasticsearch Installation and a Few Core Concepts

So, I have been having my way with Elasticsearch for a few weeks now and it's time for me to blog about it :) In this post, I will only highlight a few things here which were useful to me at the beginning.
2014-09-25 15:01
Tugberk Ugurlu


So, I have been having my way with Elasticsearch for a few weeks now and it's time for me to blog about it :) I hear what you say :) This is yet another 101 blog post but settle down :) I have a few selfish reasons here. Blogging on a new technology is a way for me to grasp it better. Also, during this time, I have also been looking into Azure Search, a hosted search service solution by Microsoft. You should check that service out, too, if what you want is an easily scalable hosted search solution.

OK, what were we talking about? Oh yes, Elasticsearch :) It's awesome! I mean it, seriously. If you haven't looked at it, you should check it out. It's a search product which makes data search and analytics very easy. It's built on top of the famous search engine library Apache Lucene. Elasticsearch also has great documentation. So, I will only highlight a few things here which were useful to me at the beginning.

Setting it Up

To get started with Elasticsearch, you can download it from here. At the time of this writing, version 1.3.2 was the latest. What you need to do is fairly simple: download the zip file, extract it, navigate to the bin directory and run Elasticsearch from the command line.

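Assuming you extracted the 1.3.2 zip into a folder named elasticsearch-1.3.2, the steps boil down to something like this:

cd elasticsearch-1.3.2/bin
./elasticsearch    # use elasticsearch.bat on Windows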

As we didn't specify any configuration values, Elasticsearch started with the configuration defined inside the /config/elasticsearch.yml file. If you haven't touched that file either, it will be exposed through localhost:9200. The Elasticsearch server is exposed to the world through its HTTP API and when you send a GET request to the root level, you will get the server info back.

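For instance, with curl (the pretty flag just formats the JSON response):

curl "http://localhost:9200/?pretty"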

You can also see in the response that the node even assigned itself a random name: Mole Man in this case. You can start working with this Elasticsearch node using your HTTP client of choice, but I really recommend installing Marvel, which ships with a developer console that allows you to easily issue calls to Elasticsearch’s HTTP API. To install Marvel, you need to run the following command while you are under the elasticsearch/bin path:

plugin -i elasticsearch/marvel/latest

After you are done with the installation, you should restart the server, and from there, you can navigate to localhost:9200/_plugin/marvel/sense/index.html in your favorite browser.


Now, you can start issuing requests to your Elasticsearch server right from your browser.

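Just to illustrate the Sense syntax, here is how indexing and then fetching a single document might look; the index and type names here are made up for this example:

PUT /my-test-index/tweet/1
{
  "user": "tugberk",
  "message": "Playing with Elasticsearch and Marvel"
}

GET /my-test-index/tweet/1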

The best part of this plugin is its autocomplete support while you are constructing queries.


It can even understand and offer you the fields of your types, which is pretty useful. Before going deeper, let's learn a few fundamentals about Elasticsearch.

Basics 101

When working with Elasticsearch, you will come across a few core concepts very often. I think knowing what they are beforehand will help you along the way.

Index is the container for all your documents. Each document inside your index will have a few shared properties and one of those properties is type. Each document inside your index will have a type and each type will have a defined mapping (sort of a schema). The type and its mapping are not required to be defined upfront; Elasticsearch can create the type and its mapping dynamically. However, there are lots of useful things you can achieve by altering the default mapping, such as changing the analyzer for a field.

At first glance, it seems that we can only search on types but that's actually not the case. You can even search the entire Elasticsearch node if you want.

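In Sense, a node-wide search and an index-scoped search for the same term look roughly like this (movies-test is one of the indexes I mention below):

GET /_search?q=madrid

GET /movies-test/_search?q=madrid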

I have a few indexes created here and those indexes have documents in one or more different types. When I search for the word "madrid" here, I get 568 hits throughout the node. I have some hits of the movie type from the movies-test index and some hits of the status type from the my_twitter_river index. This can be pretty powerful, depending on your scenario, if you embrace Elasticsearch’s schema-free nature. We can also search on a single index, which limits the search to the types under that particular index.

I mentioned that each type has its mapping. You can see the mapping of a type by sending a GET request to /{index}/{type}/_mapping:

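For example, for the movie type inside the movies-test index mentioned above, the request in Sense would be:

GET /movies-test/movie/_mapping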

Twitter River Plugin for Elasticsearch

If you are just getting started with Elasticsearch, what I recommend is getting some data in using the Twitter River Plugin for ElasticSearch. After you configure and run it for a few hours, you will have an insane amount of data ready to play with.


Resources

Using Azure Storage Emulator Command-Line Tool: WAStorageEmulator.exe

Starting from version 3.0 of the emulator, a few things have changed and lots of people are not aware of this. When you launch the Storage Emulator now, you will see a command prompt pop up. I wanted to write this short blog post just to give you a head start.
2014-09-15 09:28
Tugberk Ugurlu


When you download the Azure SDK for Visual Studio, it brings down a bunch of stuff for you, such as the Azure Storage and Compute Emulators. With a worker or web role project in Visual Studio, we can get both emulators up and running by simply firing up the project. However, if we are not working with a web or worker role, we need a way to fire up the storage emulator ourselves, and it is actually pretty easy. Starting from version 3.0 of the emulator, a few things have changed and lots of people are not aware of this. I wanted to write this short blog post just to give you a head start.

When you launch the Storage Emulator now, you will see a command prompt pop up.



This is WAStorageEmulator.exe, the storage emulator command-line tool, which allows you to perform a bunch of operations such as starting/stopping the emulator and querying its status. You can either run this command prompt as I did above or navigate to the C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\ directory and find WAStorageEmulator.exe there. You can read up on the Storage Emulator Command-Line Tool Reference on MSDN to find out which commands are available. What I would like to point out is the fact that you can now run the emulator in-process through the command prompt, which is quite nice.

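Here is a small sketch of the commands I use most; check the command-line reference linked above for the authoritative list, since the exact switches may vary between SDK versions:

WAStorageEmulator.exe status
WAStorageEmulator.exe start
WAStorageEmulator.exe start -inprocess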

The other thing is that you can now quite easily get the storage emulator up and running in your integration tests. You can even reset the whole storage account at the beginning of your test run, start the emulator, and stop it at the end. Check out the Using the Azure Storage Emulator for Development and Testing section on MSDN for further details.
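As a rough sketch of what that could look like in a test run script (assuming the clear, start and stop commands documented in the command-line reference):

WAStorageEmulator.exe clear all
WAStorageEmulator.exe start
REM ... run the integration tests here ...
WAStorageEmulator.exe stop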

A Gentle Introduction to Azure Search

Microsoft Azure team released Azure Search as a preview product a few days ago, a hosted search service solution by Microsoft. Azure Search is a suitable product if you are dealing with a high volume of data (millions of records) and want to have efficient, complex and clever search over that data. In this post, I will try to lay out some fundamentals about this service with a very high-level introduction.
2014-09-10 12:02
Tugberk Ugurlu


With many of the applications we build as software developers, we need our data to be exposed and we want that data to be within easy reach so that the users of the application can find what they are looking for easily. This task is especially tricky if you have a high amount of data (millions, even billions of records) in your system. At that point, the application needs to give the user a great and flawless experience so that they can filter down the results based on what they are actually looking for. Don't we have solutions to address these problems? Of course we do, and solutions such as Elasticsearch and Apache Solr are top-notch problem solvers for this matter. However, hosting these products in your own environment and making them scalable is another job entirely.

To address these problems, the Microsoft Azure team released Azure Search as a preview product a few days ago, a hosted search service solution by Microsoft. Azure Search is a suitable product if you are dealing with a high volume of data (millions of records) and want to have efficient, complex and clever search over that data. If you have worked with a search engine product (such as Elasticsearch, Apache Solr, etc.) before, you will be quite comfortable with Azure Search as it has so many similar features. In fact, Azure Search sits on top of Elasticsearch to provide its full-text search functionality. However, you shouldn't see this brand-new product as a hosted Elasticsearch service on Azure because it has a completely different public interface.

In this post, I will try to lay out some fundamentals about this service with a very high level introduction. I’m hoping that it’s also going to be a starting point for me on Azure Search blog posts :)

Getting to Know Azure Search Service

When I look at the Azure Search service, I see it as four pieces which give us the whole experience:

  • Search Service
  • Search Unit
  • Index
  • Document

Search service is the highest level of the hierarchy and it contains the provisioned search unit(s). A few concepts, such as authentication and scaling, are also scoped to the search service.

Search units allow for scaling of QPS (Queries per second), Document Count and Document Size. This also means that search units are the key concept for high availability and throughput. As a side note, high availability requires at least 3 replicas for the preview.

Index is the holder for a collection of documents based on a defined schema which specifies the capabilities of the Index (we will touch on this schema later). A search service can contain multiple indexes.

Lastly, a document is the actual holder of the data, shaped by the schema of the index it lives in. A document has a key and this key needs to be unique within the index. A document also has fields to represent the data. Fields of a document carry attributes, and those attributes define the capabilities of the field, such as whether it can be used to filter the results. Also note that the number of documents an index can contain is limited based on the search units the service has.

Windows Azure Portal Experience

Let's first have a look at the portal experience and how we can get a search service ready for our use. Azure Search is not available through the current Microsoft Azure portal. It's only available through the preview portal. Inside the new portal, click the big plus sign at the bottom left and then click "Everything".


This is going to get you to "Gallery". From there click "Data, storage, cache + backup" and then click "Search" from the new section.


You will have a nice intro about the Microsoft Azure Search service within the new window. Hit "Create" there.

Keep in mind that the service name must only contain lowercase letters, digits or dashes, cannot use a dash as the first two characters or the last character, cannot contain consecutive dashes, and is limited to between 2 and 15 characters in length. Other naming conventions for the service have been laid out here under the Naming Conventions section.

When you come to selecting the Pricing Tier, it's time to make a decision about your usage scenario.


Now, there are two options: Standard and Free. The free one should be considered a sandbox experience because it's quite limiting in terms of both performance and storage space. You shouldn't try to evaluate the Azure Search service with the free tier. It is, however, great for getting to know the HTTP API. You can create a free service and use it to run your HTTP requests against.

The standard tier is the one you would want to choose for production use. It can be scaled both in terms of QPS (Queries per Second) and document size through shards and replicas. Head to the "Configure Search in the Azure Preview portal" article for more in-depth information about scaling.

When you are done setting up your service, you can now get the admin key or the query key from the portal and start hitting the Azure Search HTTP (or REST, if you want to call it that) API.

Azure Search HTTP API

Azure Search service is managed through its HTTP API and it's not hard to guess that even the Azure Portal is using its API to manage the service. It's a lightweight API which understands JSON as the content type. When we look at it, we can divide this HTTP API into three parts:

The Index Management part of the API allows us to manage indexes with various operations such as creating, deleting and listing them. It also allows us to see some index statistics. Creating the index is probably going to be the first operation you will perform and it has the following structure:

POST https://{search-service-name}.search.windows.net/indexes?api-version=2014-07-31-Preview HTTP/1.1
User-Agent: Fiddler
api-key: {your-api-key}
Content-Type: application/json
Host:{search-service-name}.search.windows.net

{
	"name": "employees",
	"fields": [{
		"name": "employeeId",
		"type": "Edm.String",
		"key": true,
		"searchable": false
	},
	{
		"name": "firstName",
		"type": "Edm.String"
	},
	{
		"name": "lastName",
		"type": "Edm.String"
	},
	{
		"name": "age",
		"type": "Edm.Int32"
	},
	{
		"name": "about",
		"type": "Edm.String",
		"filterable": false,
		"facetable": false
	},
	{
		"name": "interests",
		"type": "Collection(Edm.String)"
	}]
}

With the above request, you can also spot a few more things which are applied to every API call we make. There is a header we are sending with the request: api-key. This is where you are supposed to put your api-key. Also, we are passing the API version through a query string parameter called api-version. Have a look at the Azure Search REST API MSDN documentation for further detailed information.

With this request, we are specifying the schema of the index. Keep in mind that schema updates are limited at the time of this writing. Although existing fields cannot be changed or deleted, new fields can be added at any time. When a new field is added, all existing documents in the index will automatically have a null value for that field. No additional storage space will be consumed until new documents are added to the index. Have a look at the Update Index API documentation for further information on index schema update.
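As a rough sketch of what adding a field might look like (assuming the Update Index API accepts a PUT of the full index definition, as the linked documentation describes; the department field here is purely hypothetical):

PUT https://{search-service-name}.search.windows.net/indexes/employees?api-version=2014-07-31-Preview HTTP/1.1
api-key: {your-api-key}
Content-Type: application/json
Host: {search-service-name}.search.windows.net

{
	"name": "employees",
	"fields": [
		{ "name": "employeeId", "type": "Edm.String", "key": true, "searchable": false },
		{ "name": "firstName", "type": "Edm.String" },
		{ "name": "lastName", "type": "Edm.String" },
		{ "name": "age", "type": "Edm.Int32" },
		{ "name": "about", "type": "Edm.String", "filterable": false, "facetable": false },
		{ "name": "interests", "type": "Collection(Edm.String)" },
		{ "name": "department", "type": "Edm.String" }
	]
}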

After you have your index schema defined, you can start populating your index with the Index Population API. The Index Population API is a little bit different and I honestly don’t like it (I have a feeling that Darrel Miller won’t like it either :)). The reason I don’t like it is the way we define the operation. With this HTTP API, we can add new documents and update or remove existing ones. However, we define the type of the operation inside the request body, which is so weird if you ask me. The other weird thing about this API is that you can send multiple operations in one HTTP request by putting them inside a JSON array. The important fact here is that those operations don’t run in a transaction, which means that some of them may succeed and some of them may fail. So, how do we know which ones actually failed? The response will contain a JSON array indicating each operation’s status. Nothing wrong with that, but why reinvent the wheel? :) I would be happier sending a batch request using the multipart content type. Anyway, enough bitching about the API :) Here is a sample request to add a new document to the index:

POST https://{search-service-name}.search.windows.net/indexes/employees/docs/index?api-version=2014-07-31-Preview HTTP/1.1
User-Agent: Fiddler
api-key: {your-api-key}
Content-Type: application/json
Host: {search-service-name}.search.windows.net

{
	"value": [{
		"@search.action": "upload",
		"employeeId": "1",
		"firstName": "Jane",
		"lastName": "Smith",
		"age": 32,
		"about": "I like to collect rock albums",
		"interests": ["music"]
	}]
}

As mentioned, you can send the operations in a batch:

POST https://{search-service-name}.search.windows.net/indexes/employees/docs/index?api-version=2014-07-31-Preview HTTP/1.1
User-Agent: Fiddler
api-key: {your-api-key}
Content-Type: application/json
Host: {search-service-name}.search.windows.net

{
	"value": [{
		"@search.action": "upload",
		"employeeId": "2",
		"firstName": "Douglas",
		"lastName": "Fir",
		"age": 35,
		"about": "I like to build cabinets",
		"interests": ["forestry"]
	},
	{
		"@search.action": "upload",
		"employeeId": "3",
		"firstName": "John",
		"lastName": "Fir",
		"age": 25,
		"about": "I love to go rock climbing",
		"interests": ["sports", "music"]
	}]
}

Check out the great documentation about the Index Population API to learn more about it.

Lastly, there are query and lookup APIs where you can use OData 4.0 expression syntax to define your query. Go and check out their documentation as well.
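To give you a feel for them, here is roughly what a search over the employees index from above and a lookup of a single document by its key might look like (the search and $filter parameters are the ones I use most; see the documentation for the full list of query options):

GET https://{search-service-name}.search.windows.net/indexes/employees/docs?search=rock&$filter=age%20lt%2030&api-version=2014-07-31-Preview HTTP/1.1
api-key: {your-api-key}
Host: {search-service-name}.search.windows.net

GET https://{search-service-name}.search.windows.net/indexes/employees/docs/2?api-version=2014-07-31-Preview HTTP/1.1
api-key: {your-api-key}
Host: {search-service-name}.search.windows.net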

Even though the service is so new, there are already great things happening around it. Sandrino Di Mattia has two cool open source projects on Azure Search: one is the RedDog.Search .NET client and the other is the RedDog Search Portal, a web-based UI tool to manage your Azure Search service. Another one is from Richard Astbury: an Azure Search node.js / JavaScript client. I strongly encourage you to check them out. There are also two great video presentations about Azure Search by Liam Cavanagh, a Senior Program Manager in the Azure Data Platform Incubation team at Microsoft.

Stop what you are doing and go watch them if you care about Azure Search. They will give you a nice overview of the product and could be your starting point.

You can also view my talk on AzureConf 2014 about Azure Search: