How Azure Web Apps Hosts an ASP.NET 5 Application

An ASP.NET 5 application has a totally different directory structure when you publish it, and it wasn't clear to me how Azure Web Apps is actually able to host an ASP.NET 5 application. If you are confused about this as well, the answer is here.
2015-04-12 10:13
Tugberk Ugurlu

I want to write this quick post because figuring out how an ASP.NET 5 application is hosted under Azure Web Apps was a big question for me. Some information on this topic is already out there, but the concept wasn't crystal clear to me because when you look at the packaged version of an ASP.NET 5 web application, it has the following structure on disk:


It gets even more interesting when you look inside the wwwroot folder:


We have the static files, a bin folder which only contains AspNet.Loader.dll, and a web.config file. The most interesting bit here is the information inside the web.config file, which will be read by Helios:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <add key="bootstrapper-version" value="1.0.0-beta4-11526" />
    <add key="runtime-path" value="..\approot\packages" />
    <add key="dnx-version" value="" />
    <add key="dnx-clr" value="" />
    <add key="dnx-app-base" value="..\approot\src\ConfyConf.Client.Web" />
  </appSettings>
</configuration>

The web.config file gives us enough evidence that wwwroot is the directory we need to point IIS at; Helios will then read these application settings to figure out where the application actually is, where the dependencies and packages are, and so on. Let's deploy the application using the Visual Studio Publish feature. I created a brand new Azure Web App on the fly and hit Publish:


When the deployment is completed, the web site is immediately up:


Let's look at what the directory structure looks like after the deployment:


The two interesting bits here are the approot and wwwroot folders. The question is how the Azure Web App knew to look into the wwwroot folder. The answer is actually dead simple, but it wasn't obvious at first glance. Before showing it, let's have a look at what IIS Express does to host an ASP.NET 5 application, which will give us a hint.

I fired up the application through Visual Studio to have IIS Express host it. After the application was up, I dug into Task Manager to get the command line arguments for IIS Express:

iisexpress.exe    10736    Running    Tugberk    00     33,804 K    33    "C:\Program Files (x86)\IIS Express\iisexpress.exe"  /config:"C:\Users\Tugberk\Documents\IISExpress\config\applicationhost.config"  /site:"WebApplication10" /apppool:"Clr4IntegratedAppPool"    IIS Express Worker Process

This points us to the applicationhost.config file and the WebApplication10 site inside it. When you look at the site node for WebApplication10, you will see that some of the magic is actually happening there:

<site name="WebApplication10" id="77">
    <application path="/" applicationPool="Clr4IntegratedAppPool">
        <virtualDirectory path="/" physicalPath="D:\apps\WebApplication10\src\WebApplication10\wwwroot" />
    </application>
    <bindings>
        <binding protocol="http" bindingInformation="*:47112:localhost" />
    </bindings>
</site>

Here, wwwroot is set up as the virtual directory for the root path of the web application. So, does this information help us see how the Azure Web App is hosting our app? Absolutely! It tells us that something similar should be configured on the Azure Web Apps side so that it treats the wwwroot folder as the root. If you navigate to the Configure section for your Azure Web App and scroll down to the bottom, you will see the virtual directory configuration there.
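For illustration, the equivalent on the Azure side would be a virtualDirectory mapping along these lines. This is only a sketch by analogy with the IIS Express entry above; the site name and physical path here are assumptions, not values taken from the actual Azure Web Apps configuration:

<!-- illustrative sketch only: a root virtual directory pointing at the deployed wwwroot folder -->
<site name="my-azure-web-app" id="1">
    <application path="/" applicationPool="my-azure-web-app">
        <virtualDirectory path="/" physicalPath="D:\home\site\wwwroot\wwwroot" />
    </application>
</site>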


Clever! My guess is that this configuration was put there when I published the web application through Web Deploy inside Visual Studio. Digging into the Web Publish Activity output could give us more information about when exactly this configuration is set.

How to Use Octopus Deploy Step Templates for SQL Release

In this post, I will go through how you can use the SQL Release Octopus Deploy step templates to make the Octopus Deploy integration of SQL Release easier, by walking through one of the deployment flows.
2015-04-06 21:39
Tugberk Ugurlu

SQL Release, a set of PowerShell cmdlets from Redgate which automate deploying changes to your production databases, went out of beta and became part of the DLM Automation Suite a few days ago. As part of this release, Octopus Deploy step templates for SQL Release are also included in the suite, and in this post I will go through how you can use these step templates to make the Octopus Deploy integration of SQL Release easier, by walking through one of the deployment flows (the recommended one).

If you are trying to integrate your SQL Server databases into your deployment pipeline, I strongly encourage you to try the DLM Automation Suite. At the time of this writing, it has a 28-day free trial option. You can also check out the documentation page for DLM Automation Suite documentation and information about the included products.


Installing SQL Release Step Templates

If you are not familiar with Step Templates, it's a plugin mechanism in Octopus Deploy which allows you to collect input from the user through a nice UI and run a specific PowerShell script based on the input passed in. A step template is nothing but a structured piece of JSON text, and they are hosted in the Octopus Deploy Step Templates library. They don't actually need to be hosted there for you to use them, but it makes it very convenient to find one in a central place.
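To give a rough idea of that structure, below is a heavily abbreviated, illustrative sketch of a script-based step template's JSON. The values are made up, and the real SQL Release templates in the library carry more metadata and parameters than this:

{
  "Name": "Redgate - Create Database Release",
  "Description": "Creates the database release resources used by the deployment step.",
  "ActionType": "Octopus.Script",
  "Properties": {
    "Octopus.Action.Script.ScriptBody": "# PowerShell calling the SQL Release cmdlets goes here"
  },
  "Parameters": [
    { "Name": "DatabaseServer", "Label": "Target SQL Server instance", "HelpText": "Server to deploy against." }
  ]
}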

SQL Release has four step templates to satisfy different flows and use cases.


The first thing to do in this demo is to install these step templates on your Octopus server. The way to install a step template is a little different than you might expect.

  1. Go to a step template's page on the Octopus Deploy Library web site.
  2. Hit the "Copy to clipboard" button on the right hand side.
  3. Go to your Octopus server and navigate to Library (on the top menu).
  4. You should see the "Step templates" pane on the left side; clicking it will open up the step templates page for your Octopus server.
  5. On that page, hit "Import", paste the step template inside the text area and click "Import".

You will end up with something similar to the following:


After I imported all the step templates for SQL Release, it was time to actually create the deployment process. We will be using only two of them in our example here.

In order to use the SQL Release step templates, you need to have SQL Release installed on the Octopus Tentacle machine. If you install SQL Release while the Tentacle is running, you need to restart the Tentacle service (through the Tentacle Manager, for example).
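If you prefer a console over the Tentacle Manager, something along these lines should do it; note that the service name pattern below assumes a default Tentacle instance, so check the actual service name on your machine first:

# restart the Tentacle so it picks up the newly installed SQL Release cmdlets
Get-Service "OctopusDeploy Tentacle*" | Restart-Service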

Setting up the Octopus Project

Before going through each step of the deployment process, I first wanted to show the end state of the Octopus project.


In the end, we will have four steps to complete our deployment for this example. A few more things to point out:

  • We will only be deploying the database for the purpose of this demo, but you can imagine having your application deployment here as well.
  • We will deploy the database schema changes in two steps in order to allow review and approval of the script.

Download and Extract NuGet Package Step

The first step will be to download and extract the NuGet package which contains the scripts folder for the database schema state. This NuGet package will be produced by another DLM Automation Suite tool named SQL CI, a plugin for your CI tool that enables continuous integration for SQL Server databases. The scripts folder I mentioned here can be produced by a few Redgate tools such as SQL Source Control. The scripts folder I am using for this sample is hosted on my GitHub repository.

In order to create a package, I ran the following SQL CI command:

sqlci Build --scriptsFolder="D:\github\Geveze\db" --outputFolder="D:\github\Geveze" --packageId="Geveze" --packageVersion="1.0.0"


Once I had the package created, I pushed it to the Octopus Deploy NuGet feed using the NuGet command line tool:

nuget push Geveze.1.0.0.nupkg -ApiKey API-CMGMYZ1GM95FHJNLWVRQQGQRAPK -Source http://localhost:4000/nuget/packages

Typically, these steps would be performed by your CI server, but I didn't want to bring CI integration in and complicate things further for this demo. Also, check out the SQL CI documentation for more information about the command line options and other related details. We won't go into details about this tool in this post.

Once I had pushed the package to the Octopus Deploy NuGet feed, I was able to see the package while configuring the step:


Create Database Release Step

This is probably the most important step in our process. Here, SQL Release will create the actual changes script and a bunch of other artifacts that can be used later. These will be generated based on the package obtained in the previous step and the target database, which will be used for the comparison.

In order to add a "Create Database Release" step, you need to hit "Add step" on the Project > Process page. From the "Choose step type" window, choose the "Redgate - Create Database Release" option.


The step configuration will look something like below:


As you can see, I am using Octopus Deploy variables here. The ones that I have are shown below:


Also note that I configured this step to run only for the Staging environment, which basically allows you to reuse the generated changes script for the deployment to the Production environment. This also means that the reviewed script will be used in all deployments. As a final note: SQL Release will fail if the state of the target database has drifted from the comparison state, which makes the whole process safer.

The next step is "Review Database Changes", which is a standard Octopus Deploy manual intervention and approval step. I will skip it here as the documentation is pretty straightforward on this.

Deploy from Database Release Step

The last step actually deploys the changes. Once sign-off is given, the changes can be deployed to the database. In order to start configuring this step, choose the "Redgate - Deploy from Database Release" step type from the "Choose step type" window. The configuration will look something like below:


One big difference here is that this step will be executed in both the Staging and Production environments. In fact, it is the only step that will run against the Production environment in our example.

Create a Release and Deploy

We are now ready to create a release (alternatively, you can take advantage of the "Automatic Release Creation" feature of Octopus Deploy) and deploy it to Staging. When you start the deployment, it will pause on the manual intervention step as expected, and we will see a few artifacts created on the right hand side.


You can see the update script, the warnings and an HTML report for the changes. If you click on the Changes.html file, it will be downloaded and you can open it with the web browser of your choice. It gives you a nice diff report.


Once you approve, the deployment will carry on and the last step will run to actually deploy the changes. When you are happy with the Staging environment, you can deploy the changes to Production by clicking the Promote button on the Octopus server.


As you can see, the only step that runs here is the last one. Remember that if either of the databases has drifted between the time of release creation and the deployment, SQL Release will refuse to deploy, which keeps the process safe.

As you can see, SQL Release and its Octopus Deploy step templates make it easy to integrate with Octopus Deploy for deploying database schema changes in a safe and reliable way. If you feel that you are struggling with making your database part of your continuous delivery story, definitely try the DLM Automation Suite and SQL Release. You can also give feedback on SQL Release on its UserVoice page.

Compiling C# Code Into Memory and Executing It with Roslyn

Let me show you how to compile a piece of C# code into memory and execute it with Roslyn. It is super easy, believe it or not :)
2015-03-31 20:39
Tugberk Ugurlu

For the last couple of days, I have been looking into how to get the Razor view engine running outside ASP.NET 5 MVC. It was fairly straightforward, but there are a few bits and pieces that you need to stitch together, which can be challenging. I will get to the Razor part in a later post; in this post, I would like to show how to compile a piece of C# code into memory and execute it with Roslyn, which was one of the pieces needed to get Razor working outside ASP.NET MVC.

The first thing to do is to install the C# code analysis library into your project through NuGet. In other words, install Roslyn :)

Install-Package Microsoft.CodeAnalysis.CSharp -pre

This will pull down a bunch of stuff like Microsoft.CodeAnalysis.Analyzers, System.Collections.Immutable, etc. as its dependencies, which is OK. In order to compile the code, we first want to create a SyntaxTree instance. We can do this pretty easily by parsing a code block using the CSharpSyntaxTree.ParseText static method.

SyntaxTree syntaxTree = CSharpSyntaxTree.ParseText(@"
    using System;
    namespace RoslynCompileSample
    {
        public class Writer
        {
            public void Write(string message)
            {
                Console.WriteLine(message);
            }
        }
    }");

The next step is to create a Compilation object. If you wonder what that is, the compilation object is an immutable representation of a single invocation of the compiler (code comments to the rescue). It is the bit which carries the information about syntax trees, referenced assemblies and the other important things you would usually hand to the compiler. We can create an instance of a Compilation object through another static method: CSharpCompilation.Create.

string assemblyName = Path.GetRandomFileName();
MetadataReference[] references = new MetadataReference[]
{
    // reference mscorlib and System.Core so that Console and LINQ types resolve
    MetadataReference.CreateFromFile(typeof(object).Assembly.Location),
    MetadataReference.CreateFromFile(typeof(Enumerable).Assembly.Location)
};

CSharpCompilation compilation = CSharpCompilation.Create(
    assemblyName,
    syntaxTrees: new[] { syntaxTree },
    references: references,
    options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

The hard part is now done. The final bit is actually running the compilation and getting the output (in our case, a dynamically linked library). To run the actual compilation, we will use the Emit method on the Compilation object. There are a few overloads of this method, but we will use the one where we pass in a Stream object and have the Emit method write the assembly bytes into it. The Emit method gives us an EmitResult object, and we can pull the status of the compilation, warnings, failures, etc. from it. Here is the actual code:

using (var ms = new MemoryStream())
{
    EmitResult result = compilation.Emit(ms);

    if (!result.Success)
    {
        IEnumerable<Diagnostic> failures = result.Diagnostics.Where(diagnostic =>
            diagnostic.IsWarningAsError ||
            diagnostic.Severity == DiagnosticSeverity.Error);

        foreach (Diagnostic diagnostic in failures)
        {
            Console.Error.WriteLine("{0}: {1}", diagnostic.Id, diagnostic.GetMessage());
        }
    }
    else
    {
        ms.Seek(0, SeekOrigin.Begin);
        Assembly assembly = Assembly.Load(ms.ToArray());
    }
}

As mentioned before, we get the EmitResult out and look at its status. If it's not a success, we get the errors out and print them. If it is a success, we load the bytes into an Assembly object. The Assembly object you have here is no different from the ones you are used to. From this point on, it's all up to your ninja reflection skills to execute the compiled code. For the purpose of this demo, it was as easy as the code below:

Type type = assembly.GetType("RoslynCompileSample.Writer");
object obj = Activator.CreateInstance(type);
type.InvokeMember("Write",
    BindingFlags.Default | BindingFlags.InvokeMethod,
    null,
    obj,
    new object[] { "Hello World" });

This was in a console application and after running the whole thing, I got the expected result:


Pretty sweet and easy! This sample is up on GitHub if you are interested.

Resistance Against London Tube Map Commit History (a.k.a. Git Merge Hell)

It's so easy to end up with a git commit history that looks like the London tube map. Let's see how we end up with those big, ugly, meaningless commit histories and how to prevent having one.
2015-03-08 17:18
Tugberk Ugurlu

If you have ever been to London, you probably know how complicated the London tube map looks :)


There is no way for me to understand this unless I look at it 100 times (and that was actually the reality).

Most of the Git repository commit histories I look at nowadays are not much different from this map. For example, I cloned the original Git source code and wanted to look at its commit history. I wish I hadn't, as my eyes hurt really badly:


This is the view from gitk, but it doesn't matter what you use here. It looks as bad as the above when you are just looking at the log:


Maybe I am being a little harsh about it, and maybe there is useful information in there. Let's look a little closer to see what it actually says:


90% of the commits above are rubbish to me. YES, I am talking about those meaningless merge commits. They have no real value; they are just implementation details that happened during development, and they make no sense when I look at them later. I previously blogged about how rebasing can help take away this pain. However, it's hard to apply the rebasing model if you don't really know what you are doing. I wanted to dig a little deeper in this post and share my opinions on this controversial topic.

How We End Up With This Mess

Let's take a very simple example and work through it to simulate how we end up with a mess like this. I have two repositories: one is called london-tube-main (upstream) and the other is london-tube-fork (origin). I have only one commit under the upstream/master branch and one additional commit under the origin/doing-stuff branch, as you can see below:


All seems good so far. I have the stable code inside upstream/master and I am working on new stuff under origin/doing-stuff. Now, let's make a new commit to upstream/master while we continue cracking on with origin/doing-stuff.


At that point, the origin/doing-stuff branch is out of sync. The logical option here is to sync with upstream/master before continuing to add new stuff. In order to do that, I ran git merge upstream/master while I was on origin/doing-stuff. If I look at what happened after running it, I see something like this:
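For reference, the whole merge-based sync boils down to a command sequence like the following, using the upstream/origin remote names from the example:

# on the feature branch, bring in the latest upstream changes with a merge
git checkout doing-stuff
git fetch upstream
git merge upstream/master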


The git merge manual explains what actually happened here really well; I copied the description, changing only the references:

Then "git merge upstream/master" will replay the changes made on the upstream/master branch since it diverged from master (i.e., 1) until its current commit (3) on top of master, and record the result in a new commit along with the names of the two parent commits and a log message from the user describing the changes.

Our simple progress flow would look like this now:


At this point, I am on a feature branch working on my stuff. I have now recorded a commit that practically says "I synced my branch, yay me!", which is pointless when you want to get this work into your stable branch. Let's add one more commit on origin/doing-stuff. At the end, we have the following picture:


Apart from the unnecessary merge commit, it is not that bad, but it can get worse. Right now, if you want to merge the origin/doing-stuff branch into upstream/master, it will be a fast-forward merge, which means only updating the branch pointer without creating a merge commit. However, some prefer to disable this behavior with the --no-ff switch for the merge command, which makes it possible to create a merge commit even when the merge resolves as a fast-forward. Doing this makes the history look even worse.
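As a quick illustration, this is the kind of merge that records an extra merge commit even though a fast-forward would have been possible (branch names as in the example above):

# on master, force a merge commit instead of a fast-forward
git checkout master
git merge --no-ff doing-stuff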


Repeat this process over and over again, and you will be really close to your own version of the London tube map™!

How Can We Get Rid of This Mess

Let's go back to one of the earlier states in our example:


To refresh our memories, we now want to keep working on the origin/doing-stuff branch, but we are out of sync with the upstream/master branch. Previously, we blindly merged upstream/master into the origin/doing-stuff branch, but this time we will rebase origin/doing-stuff onto upstream/master:


We are now in sync with upstream/master, but what happened here is actually really clever. When you do the rebase, git first rewinds your new commits, brings your branch up to date with the upstream/master commits, and then applies each of your commits one by one on top. If you don't have any conflicts, it will be a successful rebase, as it was in our case here.
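The rebase-based version of the sync is just as short; it is essentially the earlier sequence with rebase swapped in for merge:

# on the feature branch, replay the local commits on top of the latest upstream/master
git checkout doing-stuff
git fetch upstream
git rebase upstream/master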


Notice that the 2 commit is now the new-2 commit. In reality, commits are identified by a SHA1 hash, and when you do a rebase, the hashes of your commits are recalculated because the parent commit has changed, which results in a rewritten history inside your feature branch. If you try to push your changes to the origin/doing-stuff branch now, it will fail because the history has changed:


You can, however, force your changes to be pushed with the git push --force command, which will practically replace the remote history with yours:



You should really avoid doing force pushes against a branch which you share with somebody else. You really don’t want to piss your mates off :)

It's generally OK to force push to a feature branch inside your own fork that you are the only one working on. Also, if you have an open pull request on GitHub attached to that branch, it will be updated when you force push, which is nice. The general rule of thumb here is that you should work on your own fork and shouldn't push a feature branch to the shared remote upstream repository. This will generally make your life easier.

When you now add new commits and try to get these changes into upstream/master, it will just be a fast-forward merge (unless you have a --no-ff merge rule in place).


Now, the history is so much cleaner:



If you are on a long-term, big project where more than one person is involved, using merging in an obnoxious way like this will make you and your team suffer. You will feel the same way you felt when you first landed in London and picked up the London tube map: confused and terrified. I agree that applying rebasing is hard, but spend some time on getting it right throughout your team. Don't hesitate to spend a day practicing the flow in order to get it right. It may not seem important, but it really is when you need to dig into the history of the code (e.g. with git bisect). Here are a few simple rules that I follow which may also be helpful for you:

  • Do your dirty stuff inside your own fork (you can go crazy, no one will care).
  • Force --ff-only for merges into master: git config branch.master.mergeoptions "--ff-only"
  • When you are on a feature branch, always rebase or pull --rebase (see the sketch after this list).
  • To make the history look more meaningful and modular, make use of git add -p and git rebase -i.
  • Never rebase a shared branch onto your feature branch, and never force push to a shared branch. There is a really good chance of pissing your coworkers off by doing this :)
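If you want git to default to the rebase-style pull from the rules above instead of relying on memory, the following configuration is one way to do it; applying it globally or per repository is a matter of taste:

# make a plain 'git pull' rebase instead of merge
git config --global pull.rebase true
# have newly created tracking branches rebase on pull as well
git config --global branch.autosetuprebase always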

Software Applications Should Work Like Restaurants

This is a brain dump blog post, which I usually don't do, but I needed to get this off my chest. Restaurants and software applications have some common characteristics in terms of how they need to work, and this post highlights some of them.
2015-02-14 23:55
Tugberk Ugurlu

I went to a restaurant today and one particular thing struck me there; it made me really interested in writing this brain dump blog post. It was the fact that restaurants and software applications have a lot in common in terms of how they (should) function. One common characteristic I know of comes from the very clever and kind Steve Sanderson in his talk on asynchronous web applications, and I won't go into details on that. I would like to focus on the other common functional characteristic I noticed today, but before that, here is a brief background to the story :)

After I walked through the door and sat at a table, the first thing I did was order beers and some onion rings. Then, I went to my table with the beers, and less than two minutes later the onion rings were in front of me. All looked perfect. These were well-cooked, yummy onion rings.


After a few sips and finishing the onion rings, I ordered my main course, which was a medium-well cooked, 20oz steak. It took about 10 to 15 minutes for that to arrive. At the end of the day, I was happy. It was all OK, and I didn't mind waiting longer for the steak than for the onion rings because it made sense.

Why am I explaining all this? Because this is how most software applications should function, especially HTTP-based applications. A typical software application has to deal with data which is represented through the domain of the application. One way or the other, software application users interact with this data. The important fact here is that not all of this data is the same, just like the onion rings and the steak were not the same for the restaurant. So, you shouldn't assume that all of the data should be served in the same manner through your software application.

Onion Rings Case

Let's go back to my restaurant example. It was obvious to me that the restaurant fried the rings before they were ordered. They were doing this because they know the general order rate for the onion rings. So, they were making a judgment call and frying the rings ahead of time. This raises the risk of serving the onion rings to each customer with a different heat and freshness level. However, this doesn't matter that much because:

  1. The restaurant has an expected number in mind and they were not going crazy about this (like frying millions of onion rings). So, they were keeping a balance.
  2. The first priority here was to serve a good product. The second was to serve the rings fast while spending the least amount of staff resources. It was possible to achieve both of these at the same time.

The same concept they applied there should be applied in your applications, and I bet most of us are doing this already. In some cases, users just don't care about the accuracy and the freshness of the application data. Take the Foursquare example. Do I care if I don't see a newly-created coffee shop in my search results when I try to find the nearest coffee shops? Mostly not, as the results probably give me other alternatives which definitely help me at the time. Would you care if the search results were returned after 30 seconds of waiting? I bet you would most certainly be pissed about it.

Steak Case

If the onion rings case is such a success, why didn't they apply the same concept to the steak and serve it as fast as the rings? The answer is very simple: serving the steak fast doesn't make the customer happy if the steak is not cooked the way the customer wants. The customer wants to tune the doneness level of the steak, and as long as they get that, it won't matter that the serving time is longer than for the onion rings. A similar case applies to software applications, too. For instance, I would like to see my orders on Amazon accurately all the time. Making them inaccurate just to serve the data fast would make me unhappy, because I don't care how fast you are as long as the wait is tolerable.


Let's not pretend that every piece of data in our software application is the same. Fine-tune the requirements, consider the trade-offs and come up with a solid, functional plan to serve your data instead of blindly pushing it all out through the same door. If you are interested in this, make sure to read up on Eventual Consistency. Also, looking into Domain Driven Design is another thing I would recommend.

Finally, pointing to something new or making a point about what we generally do wrong is not the idea of this post. It was about explaining the concept with a unique example.