Exciting Things About ASP.NET 5 Series: Build Only Dependencies

In this very exciting post, I would like to talk about build-only dependencies, whose code gets compiled into the target project without showing up as a dependency of that project.
2015-04-28 07:48
Tugberk Ugurlu


Web development experience with .NET has never seen a change as drastic as this since its birth. Yes, I’m talking about ASP.NET 5 :) I have been dipping my toes into this water for a while now and a few days ago, I started a new blog post series about ASP.NET 5 (with hopes that I will continue this time :)). To be more specific, I’m planning to write about the things I am actually excited about in this new cloud optimized (TM) runtime. Those things could be anything coming out of the ASP.NET GitHub account: things I like about the development process, the Visual Studio tooling experience for ASP.NET 5, the bowels of the .NET Execution Runtime, or tiny little things about frameworks like MVC, Identity and Entity Framework.

In this very exciting post, I would like to talk about build-only dependencies, whose code gets compiled into the target project.

BIG ASS CAUTION! At the time of this writing, I am using DNX 1.0.0-beta5-11611. As things are moving really fast in this new world, it’s very likely that the things explained here will have changed by the time you read this post. So, be aware of this and be prepared to explore what has changed and find the corresponding new pieces.

Also, inside this post I am referencing a lot of things from the ASP.NET GitHub repositories. In order to make sure that the links won’t break in the future, I’m actually referring to them through permanent links to the files on GitHub. These links point to the files as they were in the latest commit at the time of this writing, so they have the potential to become outdated, too. Read the "Getting permanent links to files" post to see what this actually means.

The Problem

Since the start of NuGet, it has been a real pain to have source file dependencies. There are some examples of this, like TaskHelpers.Sources. When you install this package, its source files end up inside your codebase.

image

The nice thing about this type of source dependency is that you don’t need to fight DLL hell. You can have one version of this package and your consumer can have another version of it. As the source files you pull down from NuGet have no public members, there will be no problems whatsoever, since the code is compiled into each assembly separately. However, there are several problems with the way we are getting them in:

  • I am committing this code into my source control system, which is weird.
  • How about updates? What happens if I make a change to that file?

So, it wasn’t that good of an approach, but ASP.NET 5 has a top-notch solution to this problem: build-only dependencies.

Consuming Build Only Dependencies

These are the kind of dependencies that you can pull in and have compiled straight into your own assembly. As you can also guess, they won’t be shown as dependencies of your package. Let’s see an example!

One of the packages that supports this concept is the Microsoft.Framework.CommandLineUtils package. You can pull it down as a build-only dependency by declaring it inside your project.json file as below:

{
    "version": "1.0.0-*",

    "dependencies": {
        "Microsoft.Framework.CommandLineUtils": { 
            "version": "1.0.0-beta5-11611", "type": "build" 
        }
    },

    // ...
}

Notice the type field there. That indicates the type of the dependency. Let’s stop here and, without doing anything further, run dnu pack to get a NuGet package out. When we look at the manifest of the generated NuGet package, we won’t see any sign of the build dependency there:

image

Makes sense. Let’s peek inside the assembly now.

image

That’s what I expected to see. All the stuff distributed with that package is compiled into my target assembly. As you can guess, I can use this stuff inside my project without any problems:

using Microsoft.Framework.Runtime.Common.CommandLine;

namespace AspNet5CommandLineSample
{
    public class Program
    {
        public void Main(string[] args)
        {
            var app = new CommandLineApplication();
        }
    }
}

You may point out that ASP.NET 5 applications can run without assemblies on disk. That’s true, and in that case this code will end up being compiled into the target assembly in memory.

If you look at what I committed to my source control system, there is barely anything there, which solves one of the biggest pains of source packages.

Generating Build Only Dependencies

Generating libraries which can be consumed as a build-only dependency is also fairly simple, but there are some little things which don’t quite make sense. Assume I have a library called AspNet5Utils and it has the following internal type:

namespace AspNet5Utils
{
    internal static class StringExtensions
    {
        internal static string Suffix(this string value, string suffix)
        {
            return $"{value}-{suffix}";
        }
    }
}

If you want this type to end up as a build dependency, you need to declare it as shared inside the project.json file.

{
    "version": "1.0.0-*",

    "shared": "**/*.cs",

    "dependencies": {
    },

    // ...
}

Doing this gives a hint to the dnu pack command to pack these types into the shared folder inside the NuGet package.

image

Notice that there is also an assembly generated there. Maybe there is a reason why it’s there, but as I don’t have any type which ends up inside an assembly, I would expect there not to be one at all. Indeed, if you decompile the assembly, you will see that nothing is there:

image

In order to consume this package, you don’t actually need to distribute it through NuGet if you only want to use it inside the same solution. As dependency consumption is unified in ASP.NET 5, this can easily be a project dependency, as you would expect:

{
    "version": "1.0.0-*",

    "dependencies": {
        "AspNet5Utils": { "version": "", "type": "build" }
    },

    // ..
}
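With that reference in place, the shared extension method is available to the consuming project just like any other internal helper. Here is a tiny, made-up consumer (the class and namespace names are mine, not from the sample projects above):

using System;
using AspNet5Utils;

namespace AspNet5UtilsConsumer
{
    public class Program
    {
        public void Main(string[] args)
        {
            // Suffix comes from the shared sources compiled into this assembly
            Console.WriteLine("value".Suffix("build")); // prints "value-build"
        }
    }
}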

In my opinion, this is one of the many powerful and yet simple concepts that ASP.NET 5 has brought to us. Enjoy!

How Azure Web Apps Hosts an ASP.NET 5 Application

An ASP.NET 5 application has a totally different directory structure when you publish it, and it wasn't clear to me how Azure Web Apps is actually able to host an ASP.NET 5 application. If you are confused about this as well, the answer is here.
2015-04-12 10:13
Tugberk Ugurlu


I want to write this quick post because figuring out how an ASP.NET 5 application is hosted under Azure Web Apps was a big question for me. Some information on this topic is already out there, but the concept wasn’t crystal clear, because when you look at the packaged version of an ASP.NET 5 web application, it has the following structure on disk:

image

It gets even more interesting when you look inside the wwwroot folder:

image

We have the static files, a bin folder which only contains AspNet.Loader.dll, and a web.config file. The most interesting bit here is the information inside the web.config file, which will be read by Helios:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <add key="bootstrapper-version" value="1.0.0-beta4-11526" />
    <add key="runtime-path" value="..\approot\packages" />
    <add key="dnx-version" value="" />
    <add key="dnx-clr" value="" />
    <add key="dnx-app-base" value="..\approot\src\ConfyConf.Client.Web" />
  </appSettings>
</configuration>

The web.config file gives us enough evidence that wwwroot is the directory we need to point IIS at; Helios will then read these application settings to figure out where the application actually is, where the dependencies and packages are, and so on. Let’s deploy the application using the Visual Studio publish feature. I created a brand new Azure Web App on the fly and hit publish:

image

When the deployment is completed, the web site is immediately up:

image

Let’s look at what the directory structure looks like after the deployment:

image

The two interesting bits here are the approot and wwwroot folders. The question is how the Azure Web App knew to look into the wwwroot folder. It’s actually dead simple, but it wasn’t obvious at first glance. Before showing the answer, let’s have a look at what IIS Express does to host an ASP.NET 5 application, which will give us a hint about the answer.

I fired up the application through Visual Studio to get IIS Express to host it. After the application was up, I dug into Task Manager to get the command line arguments for IIS Express:

"C:\Program Files (x86)\IIS Express\iisexpress.exe" /config:"C:\Users\Tugberk\Documents\IISExpress\config\applicationhost.config" /site:"WebApplication10" /apppool:"Clr4IntegratedAppPool"

This points us to the applicationhost.config file and the WebApplication10 site inside it. When you look at the site node for WebApplication10, you will see that some of the magic is actually happening there:

<site name="WebApplication10" id="77">
    <application path="/" applicationPool="Clr4IntegratedAppPool">
        <virtualDirectory path="/" physicalPath="D:\apps\WebApplication10\src\WebApplication10\wwwroot" />
    </application>
    <bindings>
        <binding protocol="http" bindingInformation="*:47112:localhost" />
    </bindings>
</site>

wwwroot is mapped here as the virtual directory for the root path of the web application. So, does this information help us see how Azure Web Apps hosts our app? Absolutely! It tells us that something similar must be configured on the Azure Web Apps side so that it sees the wwwroot folder as the root. If you navigate to the Configure section of your Azure Web App and scroll down to the bottom, you will see the virtual directory configuration there.

azure-aspnet5-virtual-directory
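In my case, the mapping pointed the root path at the wwwroot folder of the deployment, roughly like this (I'm reproducing the portal setting from memory, so treat the exact path as an assumption rather than gospel):

/    ->    site\wwwroot\wwwroot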

Clever! My guess is that this configuration was put there when I published the web application through Web Deploy inside Visual Studio. Digging into the Web Publish Activity output could give us more information about exactly when this configuration is set.

How to Use Octopus Deploy Step Templates for SQL Release

In this post, I will show how you can use the SQL Release step templates for Octopus Deploy to make the Octopus Deploy integration of SQL Release easier, by walking through one of the deployment flows.
2015-04-06 21:39
Tugberk Ugurlu


SQL Release, a set of PowerShell cmdlets from Redgate which automate deploying changes to your production databases, went out of beta and became part of the DLM Automation Suite a few days ago. As part of this release, Octopus Deploy step templates for SQL Release are also included in the suite, and in this post I will show how you can use these step templates to make the Octopus Deploy integration of SQL Release easier, by walking through one of the deployment flows (the recommended one).

If you are trying to integrate your SQL Server databases into your deployment pipeline, I strongly encourage you to try the DLM Automation Suite out. At the time of this writing, it has a 28-day free trial option. You can also check out the DLM Automation Suite documentation page for more information about the included products.

image

Installing SQL Release Step Templates

If you are not familiar with step templates, they are a plugin mechanism in Octopus Deploy which allows you to gather input from the user through a nice UI and run a specific PowerShell script based on the input passed in. A step template is nothing but structured JSON text, and they are hosted inside the Octopus Deploy step templates library. They don’t actually need to be hosted there in order for you to use them, but it makes it very convenient to find them in one central place.
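To give you an idea of the shape, here is a trimmed-down sketch of what an exported step template roughly looks like (field names are from the templates I have poked at; the parameter shown is a made-up example, and real templates carry a full PowerShell script and many more parameters):

{
    "Name": "Redgate - Create Database Release",
    "Description": "Creates the resources needed to deploy the database changes.",
    "ActionType": "Octopus.Script",
    "Version": 1,
    "Properties": {
        "Octopus.Action.Script.ScriptBody": "# PowerShell that drives the SQL Release cmdlets goes here"
    },
    "Parameters": [
        {
            "Name": "DatabaseServer",
            "Label": "Target SQL Server instance",
            "HelpText": "The SQL Server instance to compare against and deploy to.",
            "DefaultValue": null
        }
    ]
}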

SQL Release has four step templates to satisfy different flows and use cases.

image

The first thing to do in my demo here is to install these step templates onto your Octopus server. The way to install a step template is a little different than you might expect:

  1. Go to a step template's page on the Octopus Deploy Library web site.
  2. Hit the "Copy to clipboard" button on the right hand side.
  3. Go to your Octopus server and navigate to Library (on the top menu).
  4. Click "Step templates" on the left side, which will open up the step templates page for your Octopus server.
  5. On that page, hit "Import", paste the step template inside the text area and click "Import".

You will end up with a look similar to the following one:

image

After importing all the step templates for SQL Release, it is time to actually create the deployment process. We will only be using two of these step templates in our example here.

In order to use the SQL Release step templates, you need to have SQL Release installed on the Octopus Tentacle machine. If you install SQL Release while the Tentacle is running, you need to restart the Tentacle service (through the Tentacle Manager, for example).

Setting up the Octopus Project

Before going through each step of the deployment process, I first wanted to show what the Octopus project looks like at the end.

image

At the end, we will have four steps to complete our deployment for this example. A few more things to point out:

  • We will only be deploying the database for the purpose of this demo, but you can imagine having your application deployment here as well.
  • We will deploy the database schema changes in two steps in order to allow review and approval of the script.

Download and Extract NuGet Package Step

The first step is to download and extract the NuGet package which contains the scripts folder for the database schema state. This NuGet package is produced by another DLM Automation Suite tool named SQL CI, a plugin for your CI tool that enables continuous integration for SQL Server databases. The scripts folder I mentioned here can be produced by a few Redgate tools such as SQL Source Control. The scripts folder I am using for this sample is hosted on my GitHub repository.

In order to create a package, I ran the following SQL CI command:

sqlci Build --scriptsFolder="D:\github\Geveze\db" --outputFolder="D:\github\Geveze" --packageId="Geveze" --packageVersion="1.0.0"

image

Once I had the package created, I pushed it to the Octopus Deploy NuGet feed using the NuGet command line tool:

nuget push Geveze.1.0.0.nupkg -ApiKey API-CMGMYZ1GM95FHJNLWVRQQGQRAPK -Source http://localhost:4000/nuget/packages

Typically, these steps would be performed by your CI server, but I didn’t want to bring CI integration into the picture and complicate things further for this demo. Also, check out the SQL CI documentation for more information about the command line options and other related details. We won’t go into more detail about this tool in this post.

Once I had pushed the package to the Octopus Deploy NuGet feed, I was able to see the package while configuring the step:

image

Create Database Release Step

This is probably the most important step in our process. Here, SQL Release will create the actual change script and a bunch of other artifacts that can be used later. These are generated based on the package obtained in the previous step and the target database, which is used for the comparison.

In order to add a "Create Database Release Step", you need to hit "Add step" on the Project > Process page. From the "Choose step type" window, choose "Redgate - Create Database Release" option.

image

The step configuration will look something like below:

image

As you can see, I am using Octopus Deploy variables here. The ones that I have are as shown below:

image

Also note that I configured this step to run only for the Staging environment, which basically allows you to reuse the generated change script when deploying to the Production environment. This also means that the reviewed script will be the one used in all deployments. As a final note: SQL Release will fail if the state of the target database has drifted from the state it was compared against, which makes the whole process safer.

The next step is "Review Database Changes", which is a standard Octopus Deploy manual intervention and approval step. I will skip that step here as the documentation on it is pretty straightforward.

Deploy from Database Release Step

The last step is the one that actually deploys the changes. Once sign-off is given, the changes to the database can be deployed. In order to start configuring this step, choose the "Redgate - Deploy from Database Release" step type from the "Choose step type" window. The configuration will look something like below:

image

One big difference here is that this step will be executed in both the Staging and Production environments. In fact, it is the only step that will run against the Production environment in our example here.

Create a Release and Deploy

We are now ready to create a release (alternatively, you can take advantage of the "Automatic Release Creation" feature of Octopus Deploy) and deploy to Staging. When you start the deployment, it will pause on the manual intervention step as expected, and we will see a few artifacts created on the right hand side.

image

You can see the update script, warnings and an HTML report for the changes. If you click on the Changes.html file, it will be downloaded and you can open it with the web browser of your choice. It gives you a nice diff report.

image

Once you approve, the deployment will carry on and the last step will run to actually deploy the changes. When you are happy with the Staging environment, you can deploy the changes to Production by clicking the Promote button on the Octopus server.

image

As you can see, the only step that runs here is the last one. Remember that if either of the databases has drifted between the time of release creation and deployment, SQL Release will refuse to deploy, which makes the process safer.

Obviously, SQL Release and its Octopus Deploy step templates make it easier to integrate with Octopus Deploy for deploying database schema changes in a safe and reliable way. If you feel that you are struggling with making your database part of your continuous delivery story, definitely try the DLM Automation Suite and SQL Release out. You can also give feedback on SQL Release on its UserVoice page.

Compiling C# Code Into Memory and Executing It with Roslyn

Let me show you how to compile a piece of C# code into memory and execute it with Roslyn. It is super easy, believe it or not :)
2015-03-31 20:39
Tugberk Ugurlu


For the last couple of days, I have been looking into how to get the Razor view engine running outside ASP.NET 5 MVC. It was fairly straightforward, but there are a few bits and pieces that you need to stitch together, which can be challenging. I will get to the Razor part in a later post; in this post, I would like to show how to compile a piece of C# code into memory and execute it with Roslyn, which was one of the pieces of getting Razor to work outside ASP.NET MVC.

The first thing to do is to install the C# code analysis library into your project through NuGet. In other words, install Roslyn :)

Install-Package Microsoft.CodeAnalysis.CSharp -pre

This will pull down a bunch of stuff like Microsoft.CodeAnalysis.Analyzers, System.Collections.Immutable, etc. as dependencies, which is OK. In order to compile the code, we first want to create a SyntaxTree instance. We can do this pretty easily by parsing the code block using the CSharpSyntaxTree.ParseText static method.

SyntaxTree syntaxTree = CSharpSyntaxTree.ParseText(@"
    using System;

    namespace RoslynCompileSample
    {
        public class Writer
        {
            public void Write(string message)
            {
                Console.WriteLine(message);
            }
        }
    }");
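In case you want to follow along in a console application, the snippets in this post assume the following using directives at the top of the file:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Emit;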

The next step is to create a Compilation object. If you wonder, the compilation object is an immutable representation of a single invocation of the compiler (code comments to the rescue). It is the actual bit which carries the information about syntax trees, reference assemblies and other important stuff that you would usually pass to the compiler. We can create an instance of a Compilation object through another static method: CSharpCompilation.Create.

string assemblyName = Path.GetRandomFileName();
MetadataReference[] references = new MetadataReference[]
{
    MetadataReference.CreateFromFile(typeof(object).Assembly.Location),
    MetadataReference.CreateFromFile(typeof(Enumerable).Assembly.Location)
};

CSharpCompilation compilation = CSharpCompilation.Create(
    assemblyName,
    syntaxTrees: new[] { syntaxTree },
    references: references,
    options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

The hard part is now done. The final bit is actually running the compilation and getting the output (in our case, a dynamically linked library). To run the actual compilation, we will use the Emit method on the Compilation object. There are a few overloads of this method, but we will use the one where we can pass in a Stream object and make the Emit method write the assembly bytes into it. The Emit method gives us an instance of an EmitResult object, and we can pull the status of the compilation, warnings, failures, etc. from it. Here is the actual code:

using (var ms = new MemoryStream())
{
    EmitResult result = compilation.Emit(ms);

    if (!result.Success)
    {
        IEnumerable<Diagnostic> failures = result.Diagnostics.Where(diagnostic => 
            diagnostic.IsWarningAsError || 
            diagnostic.Severity == DiagnosticSeverity.Error);

        foreach (Diagnostic diagnostic in failures)
        {
            Console.Error.WriteLine("{0}: {1}", diagnostic.Id, diagnostic.GetMessage());
        }
    }
    else
    {
        ms.Seek(0, SeekOrigin.Begin);
        Assembly assembly = Assembly.Load(ms.ToArray());
    }
}

As mentioned before, we are getting the EmitResult out and looking at its status. If it’s not a success, we get the errors out and write them to the console. If it’s a success, we load the bytes into an Assembly object. The Assembly object you have here is no different from the ones you are used to. From this point on, it’s all up to your ninja reflection skills to execute the compiled code. For the purpose of this demo, it was as easy as the code below:

Type type = assembly.GetType("RoslynCompileSample.Writer");
object obj = Activator.CreateInstance(type);
type.InvokeMember("Write",
    BindingFlags.Default | BindingFlags.InvokeMethod,
    null,
    obj,
    new object[] { "Hello World" });

This was in a console application and after running the whole thing, I got the expected result:

image

Pretty sweet and easy! This sample is up on GitHub if you are interested.

Resistance Against London Tube Map Commit History (a.k.a. Git Merge Hell)

It's so easy to end up with a git commit history which looks like the London tube map. Let's see how we end up with those big, ugly, meaningless commit histories and how to prevent having one.
2015-03-08 17:18
Tugberk Ugurlu


If you have ever been to London, you probably know how complicated the London tube map looks :)

2014-12-17 20.23.39

There is no way for me to understand this unless I look at it 100 times (and that was actually the reality).

Most of the Git repository commit histories I look at nowadays are not much different from this map. For example, I cloned the original Git source code and wanted to look at its commit history. I wish I hadn’t done that, as my eyes hurt really badly:

Screenshot 2015-03-07 12.24.31

This is the view from gitk, but it doesn’t matter what tool you use here. It looks just as bad when you are simply looking at the log:

image

Maybe I am being a little skeptical about it and maybe there is useful information in there. Let’s look a little closer to see what it actually says:

image

90% of the commits above are rubbish to me. YES, I am talking about those meaningless merge commits. They have no real value; they are just implementation details that happened during development, and they make no sense when I look at them later. I previously blogged about how rebase can help take away this pain. However, it’s hard to apply the rebasing model if you don’t really know what you are doing. I wanted to dig a little deeper in this post and share my opinions on this controversial topic.

How We Are Ending up With This Mess

Let’s take a very simple example and work through it to simulate how we end up with a mess like this. I have two repositories: one is called london-tube-main (upstream) and the other one is london-tube-fork (origin). I only have one commit on the upstream/master branch and one additional commit on the origin/doing-stuff branch, as you can see below:

Picture6

All seems good so far. I have the stable code inside upstream/master and I am working on new stuff under origin/doing-stuff. Now, let’s make a new commit to upstream/master while we continue cracking on with origin/doing-stuff.

Picture5

At that point, the origin/doing-stuff branch is out of sync. The logical option here is to sync with upstream/master before continuing to add new stuff. In order to do that, I ran git merge upstream/master while I was on origin/doing-stuff. If I look at what happened after running it, I see something like this:

image
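For completeness, the full command sequence for that sync (using the remote and branch names from the example) was roughly:

git checkout doing-stuff
git fetch upstream
git merge upstream/master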

The git merge manual explains what actually happened here really well; I copied the description, changing only the references:

Then "git merge upstream/master" will replay the changes made on the upstream/master branch since it diverged from master (i.e., 1) until its current commit (3) on top of master, and record the result in a new commit along with the names of the two parent commits and a log message from the user describing the changes.

Our simple progress flow would look like this now:

Picture7

At this point, I am on a feature branch working on my stuff. I have now recorded a commit practically saying "I synced my branch, yay me!", which is pointless when you want to get this work into your stable branch. Let’s add one more commit on origin/doing-stuff. At the end, we have the following picture:

Picture8

Apart from the unnecessary merge commit, it is not that bad, but it can get worse. Right now, if you want to merge the origin/doing-stuff branch into upstream/master, it will be a fast-forward merge, which means only updating the branch pointer without creating a merge commit. However, some prefer to disable this behavior with the --no-ff switch of the merge command, which creates a merge commit even when the merge resolves as a fast-forward. Doing this makes the history look even worse.

image
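For reference, that kind of merge would be produced by something along these lines, run from the master branch:

git checkout master
git merge --no-ff doing-stuff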

Repeat this process over and over again and you will be really close to your own version of the London tube map™!

How Can We Get Rid of This Mess

Let’s go back to one of the earlier states in our example:

Picture5

To refresh our memories: we now want to keep working on the origin/doing-stuff branch, but we are out of sync with the upstream/master branch. Previously, we blindly merged upstream/master into the origin/doing-stuff branch; this time we will rebase origin/doing-stuff onto upstream/master:

image

We are now in sync with upstream/master, but what happened here is actually really clever. When you do the rebase, git first sets your new commits aside and brings your branch up to date with the upstream/master commits. It then applies each of your commits on top, one by one. If you don’t have any conflicts, it will be a successful rebase, as it was in our case here.
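In command form, the rebase flow (again, using the remote and branch names from the example) is roughly:

git checkout doing-stuff
git fetch upstream
git rebase upstream/master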

Picture10

Notice that the 2 commit is now the new-2 commit. In reality, commits are identified by a SHA-1 hash, and when you do a rebase, the hashes of your new commits are recalculated because the parent commit has changed, which results in rewritten history inside your feature branch. If you try to push your changes to the origin/doing-stuff branch now, it will fail because the history has changed:

image

You can, however, force your changes to be pushed with the git push --force command, which will practically replace the remote history with your local one:

image
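In this example, that force push was along the lines of:

git push --force origin doing-stuff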

DANGER ALERT!

You should really avoid doing force pushes against a branch which you share with somebody else. You really don’t want to piss your mates off :)

It’s generally OK to force push to a feature branch inside your own fork that you are the only one working on. Also, if you have an open pull request on GitHub attached to that branch, the pull request will be updated when you force push, which is nice. The general rule of thumb here is that you should work in your own fork and shouldn’t push feature branches to the shared remote upstream repository. This will generally make your life easier.

When you now add new commits and try to get these changes into upstream/master, it will just be a fast-forward merge (unless you have a --no-ff merge rule in place).

image

Now, the history is so much cleaner:

image

Conclusion

If you are on a big, long-term project where more than one person is involved, using merging like this is obnoxious and it will make you and your team suffer. You will feel the same way you felt when you first landed in London and picked up the London tube map: confused and terrified. I agree that applying rebasing is hard, but spend some time on this to get it right throughout your team. Do not hesitate to spend a day practicing the flow in order to get it right. It may not seem important, but it really matters when you need to dig into the history of the code (e.g. with git bisect). Here are a few simple rules that I follow which may also be helpful for you:

  • Do your dirty stuff inside your own fork (you can go crazy, no one will care).
  • Force --ff-only for merges into master: git config branch.master.mergeoptions "--ff-only"
  • When you are on a feature branch, always rebase or pull --rebase.
  • To make the history look more meaningful and modular, make use of git add -p and git rebase -i.
  • Never rebase a shared branch onto your feature branch and never force push to a shared branch. There is a really good chance of pissing your coworkers off by doing this :)
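Putting those rules together, the day-to-day flow ends up looking roughly like this (remote and branch names here are placeholders; adapt them to your setup):

# one-time setup: only allow fast-forward merges into master, and rebase on pull by default
git config branch.master.mergeoptions "--ff-only"
git config --global pull.rebase true

# day-to-day work on a feature branch inside your own fork
git checkout my-feature-branch
git fetch upstream
git rebase upstream/master
git push --force origin my-feature-branch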
