How to Use Octopus Deploy Step Templates for SQL Release

In this post, I will show how you can use the SQL Release Octopus Deploy step templates to make integrating SQL Release with Octopus Deploy easier, by walking through one of the deployment flows.
2015-04-06 21:39
Tugberk Ugurlu


SQL Release, a set of PowerShell cmdlets from Redgate which automate deploying changes to your production databases, went out of beta and became part of the DLM Automation Suite a few days ago. As part of this release, Octopus Deploy step templates for SQL Release are also included inside the suite, and in this post I will show how you can use these step templates to make integrating SQL Release with Octopus Deploy easier, by walking through one of the deployment flows (the recommended one).

If you are trying to integrate your SQL Server databases into your deployment pipeline, I strongly encourage you to try DLM Automation Suite out. At the time of this writing, it has a 28-day free trial option. You can also check out the DLM Automation Suite documentation page for more information about the included products.

image

Installing SQL Release Step Templates

If you are not familiar with step templates, they are a plugin mechanism in Octopus Deploy which allows you to collect input from the user through a nice UI and run a specific PowerShell script based on the input passed in. A step template is nothing but structured JSON text, and step templates are hosted inside the Octopus Deploy Step Templates library. They don't actually need to be hosted there in order for you to use them, but it makes it very convenient to find them in a central place.

SQL Release has four step templates to satisfy different flows and use cases.

image

The first thing to do for this demo is to install these step templates into your Octopus server. The way to install a step template is a little different than you might expect:

  1. Go inside a step template page on Octopus Deploy Library web site.
  2. Hit "Copy to clipboard" button on the right hand side.
  3. Go to your Octopus server and navigate to Library (on the top menu).
  4. Click "Step templates" on the left side, which will open up the step templates page for your Octopus server.
  5. On that page, hit "Import", paste the step template inside the text area and click "Import".

You will end up with something similar to the following:

image

After importing all the step templates for SQL Release, it is time to actually create the deployment process. We will only be using two of them in our example here.

In order to use the SQL Release step templates, you need to have SQL Release installed on the Octopus Tentacle machine. If you install SQL Release while the Tentacle is running, you need to restart the Tentacle service (through the Tentacle Manager, for example).
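
If you prefer doing that from the command line rather than the Tentacle Manager, it is just a Windows service restart. Here is a rough sketch in PowerShell (the service name below is the usual default; yours may differ, so check what Get-Service returns first):

# See which Tentacle service(s) are installed on this machine
Get-Service *Tentacle*

# Restart the default-named Tentacle service so it picks up the newly installed SQL Release
Restart-Service -Name "OctopusDeploy Tentacle"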

Setting up the Octopus Project

Before going through each step of the deployment process, I first wanted to show what the Octopus project looks like at the end.

image

At the end, we will have four steps to complete our deployment for this example. A few more things to point out:

  • We will only be deploying the database for the purposes of this demo, but you can imagine having your application deployment here as well.
  • We will deploy the database schema changes in two steps in order to allow review and approval of the script.

Download and Extract NuGet Package Step

The first step is to download and extract the NuGet package which contains the scripts folder describing the database schema state. This NuGet package will be produced by another DLM Automation Suite tool named SQL CI, a plugin for your CI tool that enables continuous integration for SQL Server databases. The scripts folder I mentioned here can be produced by a few Redgate tools such as SQL Source Control. The scripts folder I am using for this sample is hosted on my GitHub repository.

In order to create a package, I fired the following SQL CI command:

sqlci Build --scriptsFolder="D:\github\Geveze\db" --outputFolder="D:\github\Geveze" --packageId="Geveze" --packageVersion="1.0.0"

image

Once I had the package created, I pushed it to the Octopus Deploy NuGet feed using the NuGet command line tool:

nuget push Geveze.1.0.0.nupkg -ApiKey API-CMGMYZ1GM95FHJNLWVRQQGQRAPK -Source http://localhost:4000/nuget/packages

Typically, these steps would be performed by your CI server, but I didn't want to bring CI integration in so as not to complicate this demo further. Also, check out the SQL CI documentation for more information about the command line options and other related details. We won't go into detail about this tool in this post.

Once I had pushed the package to the Octopus Deploy NuGet feed, I was able to see the package while I was configuring the step:

image

Create Database Release Step

This is probably the most important step in our process. Here, SQL Release will create the actual changes script and a bunch of other artifacts that can be used later. These will be generated based on the package it obtains from the previous step and the target database, which is used for the comparison.

In order to add a "Create Database Release Step", you need to hit "Add step" on the Project > Process page. From the "Choose step type" window, choose "Redgate - Create Database Release" option.

image

The step configuration will look something like below:

image

As you can see, I am using Octopus Deploy variables here. The ones that I have are as shown below:

image

Also note that I configured this step to only run for the Staging environment, which basically allows you to reuse the generated changes script when deploying to the Production environment. This also means that the reviewed script will be used in all deployments. As a final note: SQL Release will fail if the state of the target database has drifted from the state it was in when the comparison was made, which makes the whole process safer.

The next step is "Review Database Changes", which is a standard Octopus Deploy manual intervention and approval step. I will skip that step here as the documentation on it is pretty straightforward.

Deploy from Database Release Step

The last step is the one that actually deploys the changes once the sign-off is given. In order to start configuring this step, choose the "Redgate - Deploy from Database Release" step type from the "Choose step type" window. The configuration will look something like below:

image

One big difference here is that this step will be executed in both the Staging and Production environments. In fact, this is the only step that will run against the Production environment in our example here.

Create a Release and Deploy

We are now ready to create a release (alternatively, you can take advantage of the "Automatic Release Creation" feature of Octopus Deploy) and deploy to Staging. When you start the deployment, it will pause on the manual intervention step as expected, and we will see a few artifacts created on the right hand side.

image

You can see the update script, the warnings and an HTML report for the changes. If you click on the Changes.html file, it will be downloaded and you can open it up in your web browser of choice. It will give you a nice diff report.

image

Once you approve, the deployment will carry on and the last step will run to actually deploy the changes. When you are happy with the Staging environment, you can deploy the changes to Production by clicking the Promote button on the Octopus server.

image

As you can see, the only step that will run here is the last one. Remember that if either of the databases has drifted between the time the release was created and the deployment, SQL Release will fail the deployment to keep the process safe.

SQL Release and its Octopus Deploy step templates make it easy to deploy database schema changes through Octopus Deploy in a safe and reliable way. If you feel that you are struggling to make your database part of your continuous delivery story, definitely try DLM Automation Suite and SQL Release out. You can also give feedback on SQL Release on its UserVoice page.

Compiling C# Code Into Memory and Executing It with Roslyn

Let me show you how to compile a piece of C# code into memory and execute it with Roslyn. It is super easy, believe it or not :)
2015-03-31 20:39
Tugberk Ugurlu


For the last couple of days, I have been looking into how to get the Razor view engine running outside ASP.NET 5 MVC. It was fairly straightforward, but there are a few bits and pieces that you need to stitch together, which can be challenging. I will get to the Razor part in a later post; in this post, I would like to show how to compile a piece of C# code into memory and execute it with Roslyn, which was one of the parts of getting Razor to work outside ASP.NET MVC.

The first thing to do is to install the C# code analysis library into your project through NuGet. In other words, installing Roslyn :)

Install-Package Microsoft.CodeAnalysis.CSharp -pre

This will pull down a bunch of stuff like Microsoft.CodeAnalysis.Analyzers, System.Collections.Immutable, etc. as dependencies, which is OK. In order to compile the code, we first want to create a SyntaxTree instance. We can do this pretty easily by parsing the code block using the CSharpSyntaxTree.ParseText static method.
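
Before we get to that, in case you want to follow along, here are the using directives that the snippets in the rest of this post rely on:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Emit;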

SyntaxTree syntaxTree = CSharpSyntaxTree.ParseText(@"
    using System;

    namespace RoslynCompileSample
    {
        public class Writer
        {
            public void Write(string message)
            {
                Console.WriteLine(message);
            }
        }
    }");

The next step is to create a Compilation object. In case you are wondering, the compilation object is an immutable representation of a single invocation of the compiler (code comments to the rescue). It is the actual bit which carries the information about syntax trees, reference assemblies and other important stuff that you would usually give to the compiler as input. We can create an instance of a Compilation object through another static method: CSharpCompilation.Create.

string assemblyName = Path.GetRandomFileName();
MetadataReference[] references = new MetadataReference[]
{
    MetadataReference.CreateFromFile(typeof(object).Assembly.Location),
    MetadataReference.CreateFromFile(typeof(Enumerable).Assembly.Location)
};

CSharpCompilation compilation = CSharpCompilation.Create(
    assemblyName,
    syntaxTrees: new[] { syntaxTree },
    references: references,
    options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

The hard part is now done. The final bit is actually running the compilation and getting the output (in our case, a dynamically linked library). To run the actual compilation, we will use the Emit method on the Compilation object. There are a few overloads of this method, but we will use the one where we can pass a Stream object in and make the Emit method write the assembly bytes into it. The Emit method will give us an instance of an EmitResult object, and we can pull the status of the compilation, warnings, failures, etc. from it. Here is the actual code:

using (var ms = new MemoryStream())
{
    EmitResult result = compilation.Emit(ms);

    if (!result.Success)
    {
        IEnumerable<Diagnostic> failures = result.Diagnostics.Where(diagnostic => 
            diagnostic.IsWarningAsError || 
            diagnostic.Severity == DiagnosticSeverity.Error);

        foreach (Diagnostic diagnostic in failures)
        {
            Console.Error.WriteLine("{0}: {1}", diagnostic.Id, diagnostic.GetMessage());
        }
    }
    else
    {
        ms.Seek(0, SeekOrigin.Begin);
        Assembly assembly = Assembly.Load(ms.ToArray());
    }
}

As mentioned before, we are getting an EmitResult out and looking at its status. If it's not a success, we get the errors out and output them. If it is a success, we load the bytes into an Assembly object. The Assembly object you have here is no different from the ones you are used to. From this point on, it's all up to your ninja reflection skills to execute the compiled code. For the purpose of this demo, it was as easy as the code below:

Type type = assembly.GetType("RoslynCompileSample.Writer");
object obj = Activator.CreateInstance(type);
type.InvokeMember("Write",
    BindingFlags.Default | BindingFlags.InvokeMethod,
    null,
    obj,
    new object[] { "Hello World" });

This was in a console application and after running the whole thing, I got the expected result:

image

Pretty sweet and easy! This sample is up on GitHub if you are interested.

Resistance Against London Tube Map Commit History (a.k.a. Git Merge Hell)

It's so easy to end up with a git commit history which looks like the London tube map. Let's see how we end up with those big, ugly, meaningless commit histories and how to prevent having one.
2015-03-08 17:18
Tugberk Ugurlu


If you have ever been to London, you probably know how complicated the London tube map looks :)

image

There is no way for me to understand this unless I look at it 100 times (and that was actually the reality).

Most of the Git repository commit histories I look at nowadays are not much different from this map. For example, I cloned the original Git source code and wanted to look at its commit history. I wish I hadn't, as my eyes hurt really badly:

image

This is the gitk view, but it doesn't matter which tool you use here. It looks just as bad when you are simply looking at the log:

image

Maybe I am being a little too skeptical and maybe there is useful information in there. Let's look a little closer to see what it actually says:

image

90% of the commits above are rubbish to me. YES, I am talking about those meaningless merge commits. They have no real value; they are just implementation details that happened during development, and they make no sense when I look at them later. I previously blogged about how rebasing can help take away this pain. However, it's hard to apply the rebasing model if you really don't know what you are doing. I wanted to dig a little deeper in this post and share my opinions on this controversial topic.

How We Are Ending up With This Mess

Let's take a very simple example and work through it to simulate how we end up with a mess like this. I have two repositories: one is called london-tube-main (upstream) and the other one is london-tube-fork (origin). I only have one commit under the upstream/master branch and one more additional commit under the origin/doing-stuff branch, as you can see below:

image

All seems good so far. I have the stable code inside upstream/master and I am working on new stuff under origin/doing-stuff. Now, let's make a new commit to upstream/master while we continue cracking on with origin/doing-stuff.

image

At that point, the origin/doing-stuff branch is out of sync. The logical option here is to sync with upstream/master before continuing to add new stuff. In order to do that, I ran git merge upstream/master while I was on origin/doing-stuff. If I look at what happened after running it, I see something like this:

image
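
For the record, the commands I ran to get to this point were roughly the following (the remote and branch names are the ones from the example above):

git checkout doing-stuff
git fetch upstream
git merge upstream/master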

The git merge manual explains what actually happened here really well, and I have copied the description, changing only the references:

Then "git merge upstream/master" will replay the changes made on the upstream/master branch since it diverged from master (i.e., 1) until its current commit (3) on top of master, and record the result in a new commit along with the names of the two parent commits and a log message from the user describing the changes.

Our simple progress flow would look like this now:

image

At this point, I am on a feature branch working on my stuff. I have now recorded a commit which practically says "I synced my branch, yay me!", which is pointless when you want to get this work into your stable branch. Let's add one more commit on origin/doing-stuff. In the end, we have the following look:

image

Apart from the unnecessary merge commit, it is not that bad, but it can get worse. Right now, if you want to merge the origin/doing-stuff branch into upstream/master, it will be a fast-forward merge, which means only updating the branch pointer without creating a merge commit. However, some prefer to disable this behavior with the --no-ff switch for the merge command, which creates a merge commit even when the merge resolves as a fast-forward. Doing this makes the history look even worse.

image

Repeat this process over and over again and you will be really close to your own version of the London tube map™!

How Can We Get Rid of This Mess

Let’s go back to one of our earlier states on our example:

image

To refresh our memories: we now want to keep working on the origin/doing-stuff branch, but we are out of sync with the upstream/master branch. Previously, we blindly merged upstream/master into the origin/doing-stuff branch, but this time we will rebase origin/doing-stuff onto upstream/master:

image
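
The command side of this is short; assuming the upstream remote has already been fetched, the rebase is roughly:

git checkout doing-stuff
git rebase upstream/master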

We are now in sync with upstream/master, but what happened here is actually really clever. When you are doing the rebase, git first sets your new commits aside and brings the upstream/master commits onto your branch. It then applies each of your commits on top, one by one. If you don't have any conflicts, it will be a successful rebase, as it was in our case here.

image

Notice that the 2 commit is now the new-2 commit. In reality, commits are identified by a SHA1 hash, and when you do a rebase the hashes of your new commits are recalculated as the parent commit has changed, which results in a rewritten history inside your feature branch. If you try to push your changes to the origin/doing-stuff branch now, it will fail as the history has changed:

image

You can, however, force your changes to be pushed with the git push --force command, which will practically replace the remote history with your local one:

image

DANGER ALERT!

You should really avoid doing force pushes against a branch which you share with somebody else. You really don’t want to piss your mates off :)

It's generally OK to force push to a feature branch inside your own fork which you are the only one working on. Also, if you have an open pull request on GitHub attached to that branch, it will be updated when you force push, which is nice. The general rule of thumb here is that you should work on your own fork and shouldn't push a feature branch into the shared remote upstream repository. This will generally make your life easier.

When you now add new commits and try to get these changes into upstream/master, it will just be a fast-forward merge (unless you have a --no-ff merge rule in place).

image

Now, the history is so much cleaner:

image

Conclusion

If you are on a big, long-term project where more than one person is involved, using merging in an obnoxious way like this will make you and your team suffer. You will feel the same way you felt when you first landed in London and picked up the London tube map: confused and terrified. I agree that applying rebasing is hard, but spend some time on getting it right throughout your team. Do not hesitate to spend a day practicing the flow in order to get it right. It may not seem important, but it really is when you need to dig into the history of the code (e.g. with git bisect). Here are a few simple rules that I follow which may also be helpful for you:

  • Do your dirty stuff inside your own fork (you can go crazy, no one will care).
  • Force --ff-only for merges into master: git config branch.master.mergeoptions "--ff-only"
  • When you are on a feature branch, always rebase or pull --rebase (see the sketch after this list).
  • To make the history look more meaningful and modular, make use of git add -p and git rebase -i
  • Never rebase a shared branch onto your feature branch, and never force push to a shared branch. There is a really good chance of pissing your coworkers off by doing this :)
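
To put these rules together, here is a rough sketch of what my day-to-day feature branch flow looks like (upstream is the shared repository, origin is my own fork and doing-stuff is the feature branch, as in the example above):

# get the latest shared history
git fetch upstream

# replay my feature branch commits on top of it
git checkout doing-stuff
git rebase upstream/master

# optionally tidy up the commits before opening or updating the pull request
git rebase -i upstream/master

# force push only to my own fork, never to the shared upstream repository
git push --force origin doing-stuff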

Software Applications Should Work Like Restaurants

This is a brain dump blog post which I usually don't do, but I needed to get this off my chest. Restaurants and software applications have some common characteristics in terms of how they need to work, and this post highlights some of them.
2015-02-14 23:55
Tugberk Ugurlu


I went to a restaurant today and one particular thing struck me there, which made me really interested in writing this brain dump blog post: the fact that restaurants and software applications have a lot in common in terms of how they (should) function. One common characteristic I know of comes from the very clever and kind Steve Sanderson in his talk on asynchronous web applications, and I won't go into details on that. I would like to focus on the other common functional characteristic I noticed today, but before that, here is a brief background to the story :)

After I walked through the door, the next thing I did was order beers and some onion rings. Then, I went to my table with the beers and less than 2 minutes later, the onion rings were in front of me. All looked perfect. These were well-cooked, yummy onion rings.

image

After a few sips and finishing the onion rings, I ordered my main course, which was a medium-well cooked, 20oz steak. It took about 10 to 15 minutes for that to arrive. At the end of the day, I was happy. It was all OK and I didn't mind waiting longer for the steak than for the onion rings because it made sense.

Why am I explaining all this? Because this is how most software applications should function, especially HTTP based applications. The typical software application has to deal with data which is represented through the domain of the application. One way or the other, software application users interact with this data. The important fact here is that not all of this data is the same, just like the onion rings and the steak were not the same for the restaurant. So, you shouldn't make the assumption of serving all of the data in the same manner through your software application.

Onion Rings Case

Let's go back to my restaurant example. It was obvious to me that the restaurant fried the rings before they were ordered. They were doing this because they know the general order rate for the onion rings. So, they were making a judgment call and frying the rings ahead of time. This raises the risk of serving the onion rings to each customer at a different heat and freshness level. However, this doesn't matter that much because:

  1. The restaurant has an expected number in mind and they were not going crazy about this (like frying millions of onion rings). So, they were keeping a balance.
  2. The first priority here was to serve a good product. The second was to serve the rings fast while spending the least amount of staff resources. Both of these were possible to achieve at the same time.

The same concept they applied there should be applied in your applications, and I bet most of us are doing this already. In some cases, users just don't care about the accuracy and the freshness of the application data. Take the Foursquare example. Do I care if I don't see a newly-created coffee shop in my search results when I try to find the nearest coffee shops? Mostly not, as the results probably give me other alternatives which definitely help me at the time. Would I care if the search results were returned after 30 seconds of waiting? I would most certainly be pissed about it.

Steak Case

If the onion rings case was such a success, why didn't they apply the same concept to the steak and serve it as fast as the rings? The answer is very simple: serving the steak fast doesn't make the customer happy if the steak is not cooked the way the customer wants. The customer wants to tune the 'doneness' level of the steak, and as long as they get that, it doesn't matter if the serving time is longer than for the onion rings. A similar case applies to software applications, too. For instance, I would like to see my orders on Amazon accurately all the time. Making them inaccurate just to serve the data fast would make me unhappy, because I don't care how fast you are as long as the wait is tolerable.

Conclusion

Let's not pretend that every piece of data in our software application is the same. Fine-tune the requirements, consider the trade-offs and come up with a solid, functional plan to serve your data instead of blindly pushing it all out through the same door. If you are interested in this, make sure to read up on Eventual Consistency. Also, looking into Domain Driven Design is another thing I would recommend.

Finally, the idea of this post was not to point to something new or to make a point about what we generally do wrong. It was about explaining the concept with a different kind of example.

Dependency Injection: Inject Your Dependencies, Period!

Reasons why I prefer dependency injection over static accessors.
2014-11-18 12:39
Tugberk Ugurlu


There is a discussion going on inside the ASP.NET/Logging repository about whether we should have a static Logger available everywhere or not. I am quite against it and I stated why in a few comments there, but hopefully with this post I can address why I think the dependency injection path is better in most cases.

Let’s take an example and explain it further based on that.

public class FooManager : IFooManager
{
    private readonly IClock _clock;
    private readonly ILogger _logger;
    private readonly IEnvironmentInfo _environmentInfo;

    public FooManager(IClock clock, ILogger logger, IEnvironmentInfo environmentInfo)
    {
        _clock = clock;
        _logger = logger;
        _environmentInfo = environmentInfo;
    }

    // ...
}

This is a simple class whose job is not important in our context. The way I see this class inside Visual Studio is as below:

image

In any other text editor:

image

Inside Visual Studio when I am writing tests:

image

There are two fundamental reasons why I like this approach.

The Requirements of the Component Are Exposed Clearly

When I look at this class inside any editor, I can say that this class cannot function properly without IClock, ILogger and IEnvironmentInfo implementation instances as there is no other constructor (preferably, I would also do null checks for constructor parameters but I skipped that to keep the example clean). Instead of the above implementation, imagine that I have the following one:

public class FooManager : IFooManager
{
    public void Run()
    {
        if(DateTime.UtcNow > EnvironmentInfo.Instance.LicenceExpiresInUtc)
        {
            Logger.Instance.Log("Cannot run as licence has expired.");
        }

        // Do other stuff here...
    }
}

With this approach, we are relying on static instances of the components (and they are presumably thread-safe and singleton). What is wrong with this approach? I'm not sure what the general opinion on this is, but here are my reasons.

The first thing I do when I open up a C# code file inside Visual Studio is to press CTRL+M, O to get an idea about the component. It has become kind of a habit for me. Here is how it looks when I do that for this class:

image

I have no idea what this class needs to function properly. I also have no idea what type of environmental context it relies on. Please note that this is not that big of a problem for a class as simple as this one, but I imagine that your component will have a few other methods, possibly other interface implementations, private fields, etc.

I Don’t Need to Look at the Implementation to See What It is Using

When I try to construct the class inside a test method in Visual Studio, I can super easily see what I need to supply to make this class function the way I want. If you are using another editor which doesn't have tooling support to show you constructor parameters, you are still safe, as you will get compile errors if you don't supply the required parameters. With the above static instance approach, however, you are on your own with this issue in my opinion. It doesn't have any constructor parameters for you to easily see what it needs. It relies on static instances which are hard and dirty to mock. For me, it's horrible to see C# code written like that.
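
Just to make that concrete, here is a rough sketch of what constructing the injected version looks like inside a test (I am assuming xUnit and Moq here, and the test class and method names are made up for the example):

using Moq;
using Xunit;

public class FooManagerTests
{
    [Fact]
    public void Constructor_Makes_The_Dependencies_Explicit()
    {
        // Every dependency is visible right here; nothing hides behind a static instance.
        var clock = new Mock<IClock>();
        var logger = new Mock<ILogger>();
        var environmentInfo = new Mock<IEnvironmentInfo>();

        IFooManager fooManager = new FooManager(clock.Object, logger.Object, environmentInfo.Object);

        Assert.NotNull(fooManager);
    }
}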

When you try to write a test against the static version of this class, on the other hand, here is what it looks like inside Visual Studio:

image

You have no idea what it needs. Hit F12, try to figure out what it needs by looking at the implementation, and good luck with mocking the static read-only members and DateTime.Now :)

I'm intentionally skipping why you shouldn't use DateTime.Now directly inside your library (or even the whole DateTime API). The reasons vary depending on the context. However, here are a few further readings for you on this subject:

It is Still Bad Even If It is not Static

Yes, it’s still bad. Let me give you a hint: Service Locator Pattern.
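
To make that hint a little more concrete, here is a rough sketch of the same class with its dependencies pulled from an injected locator instead (IServiceLocator, its Resolve method and the Log call on ILogger are made up for illustration):

public class FooManager : IFooManager
{
    private readonly IServiceLocator _serviceLocator;

    public FooManager(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
    }

    public void Run()
    {
        // The locator itself is injected, so nothing is static anymore, but the real
        // dependencies are still hidden inside the method body. The constructor no
        // longer tells you (or your tests) what this class actually needs.
        ILogger logger = _serviceLocator.Resolve<ILogger>();
        logger.Log("Running...");

        // Do other stuff here...
    }
}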
