As I blogged earlier, I had the privilege of attending NDC Oslo 2016 as a speaker this year. It was a fabulous experience. I took a different approach at NDC Oslo this year: rather than deepening my existing knowledge, I decided to get a different perspective on problems. There were a few areas in particular where I wanted to learn more from world-class experts:
- Machine Learning
- Functional Programming
- Soft Skills
Of course, this is NDC and you have people like Mark Rendle and Rob Conery, so it was inevitable that I ended up at sessions which were just plain fun :) Nevertheless, it was a pretty useful three days at the conference for me, and it feels like I achieved my end goal.
One other amazing aspect of NDC conferences is that all of the talks are recorded, and nearly all of those recordings are now up on Vimeo for public consumption :)
Here is the list of sessions I had a chance to attend in person:
- Keynote: Yesterday’s Technology is Dead, Today’s is on Life Support - Troy Hunt
- What every Node.js developer needs to know about Elixir - Bryan Hunter
- Intro to Azure Machine Learning: Predict Who Survives the Titanic - Jennifer Marsman
- R for the .NET Developer - Jamie Dixon
- Head to Head #2: K. Scott Allen and Jimmy Bogard - Jimmy Bogard, Rob Conery, Scott Allen
- Working Distributed - How Does It Even Work? - Brendan Forster
- Is your code ready for .NET Core? - Mark Rendle
- Don’t Be Dilbert: Survival Tactics for Uninspiring Workplaces - Kylie Hunt
- Deploying Docker Containers on Windows Server 2016 - Ben Hall
- Sunlight & Air: 10 Lessons for Growing Junior Developers - Erika Carlson
- Have I Got NDC for You!
- Elixir Is Neat But What Can You Actually Do With It? - Rob Conery
- .NET Rocks Live: Security Panel
- Fun with Mind Reading: Using EEG and Machine Learning to Perform Lie Detection - Jennifer Marsman
All of them were really helpful and gave me a different perspective on their topics. Aside from those sessions, here are the talks that I really wanted to attend but couldn't, and will definitely watch:
- F# in the Real World - Yan Cui
- Phoenix a web framework for the new web - José Valim
- Functional web applications using F# and suave - Tomas Jansson
- Functional Programming for the Object Oriented - Øystein Kolsrud
- Agile experiments in Machine Learning with F# - Mathias Brandewinder
- Go - one language you should try - Andrzej Grzesik
- Python: An Amazing Second Language for .NET Developers - Michael Kennedy
- ASP.NET Core 1.0 Deep Dive - David Fowler and Damian Edwards
- ASP.NET Identity 3 - Brock Allen
- ASP.NET Core Kestrel: Adventures in building a fast web server - David Fowler and Damian Edwards
- Authentication & secure API access for native & mobile Applications - Dominick Baier
- Deploying Kubernetes, a Container Cluster Manager - Ben Hall
- .NET Deployment Strategies: the Good, the Bad, and the Ugly - Damian Brady
- Panel: Launching a Software Business
- Becoming a Social Developer - Jeremy Clark
- Safe At Any Speed - Ian Cooper
- Universal web with npm, React and Redux using WebPack - Jake Ginnivan
- Offline Web Applications - Max Stoiber
- Domain-Driven Design: The Good Parts - Jimmy Bogard
- Data Modeling with Document Stores - Martin Esmann
- Lessons from a quarter of a billion breached records - Troy Hunt
- DNS for Developers - Maarten Balliauw
- Continuous Integration for Open Source Projects with Travis CI - Kyle Tyacke
As you can see, the schedule was full of amazing talks. There are also some other talks that seem interesting, but I will probably not have time to look at them:
- Understanding parser combinators: a deep dive - Scott Wlaschin
- Phoenix Channels - a Distributed PubSub and Presence Platform - Sonny Scroggin
- Performance is not an Option - Building services with GRPC and Cassandra - Dave Bechberger
- Building an app in ASP.NET Core and MVC 6 for the Raspberry Pi - Roland Guijt
- App 2.0 - why the web lost and Apps won. - Liam Westley
- Website Fuzziness - Niall Merrigan
- .NET without Windows - Matt Ellis
- Real world Erlang - an honest view - Rob Ashton
- Who’s Afraid of Graphs? - David Ostrovsky
- Simplifying Thread Safety - Andrew Clymer
- Building a Live Programming Tool with Roslyn - Josh Varty
- IoT at home - The solution to all your spare time problems - Karl-Henrik Nilsson
At the conference I gave a talk on zero-downtime deployments. I tried to give useful and practical guidance based on my own experiences. I hope it was useful for everyone who attended the talk. The recording of my talk is also available now.
Finally, you can find the demo sample I used in my talk to simulate a zero-downtime deployment process under my GitHub account. It also has great instructions on how to run the sample. So, definitely give it a go.
Next week, I am off to Oslo for one of my favorite conferences: NDC Oslo. I have been to NDC Oslo before, in 2014, but this time is a little more special as I am one of the speakers this year. Seeing the list of awesome speakers at the conference makes me excited and nervous at the same time :)
As for my topic, I am going to be talking about zero-downtime deployments. I will tell you about some patterns, practices and techniques that make this challenging task easier, such as semantic versioning and blue/green deployments. We'll also walk through an end-to-end demo of how a high-traffic web application can survive the challenge of deployments. If you are going to be there, I hope you will also join me at my talk.
I can see lots of people from my Twitter timeline already tweeting about NDC Oslo. So, I am really sure that this is going to be an exciting event. I hope to see lots of friends and make new friends while I am there. See you in Oslo :)
.NET Core Runtime RC2 was released a few days ago along with .NET Core SDK Preview 1, and ASP.NET Core RC2 was released at the same time. Today, I started the transition from RC1 to RC2, and I wanted to write about how I am getting each stage done. Hopefully, it will be somewhat useful to you as well. In this post, I want to talk about the tooling aspect of the transition.
Get the dotnet CLI Ready
One of the biggest shifts from RC1 to RC2 is the tooling. Before, we had DNVM, DNX and DNU as command line tools. All of them are now gone (RIP). Instead, we have a single command line tool: the dotnet CLI. First, I installed the dotnet CLI on my Ubuntu 14.04 VM by running the following commands, as explained here:
sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet/ trusty main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver apt-mo.trafficmanager.net --recv-keys 417A0893
sudo apt-get update
sudo apt-get install dotnet-dev-1.0.0-preview1-002702
This installed the dotnet-dev-1.0.0-preview1-002702 package on my machine, and I was good to go.
You can also use apt-cache to see all available versions.
Just to make things clear, I deleted the ~/.dnx folder entirely to get rid of all the RC1 garbage.
Get Visual Studio Code Ready
At this stage, I had the C# extension installed on my VS Code instance on my Ubuntu VM, which only worked for DNX-based projects. So, I opened up VS Code and updated my C# extension to the latest version (1.0.11). After the upgrade, I opened a project which was still on RC1, and VS Code started downloading the .NET Core Debugger.
That was a good experience. I am not sure at this point why the debugger doesn't come inside the extension itself; there is probably a reason for that, but it's not my priority to dig into right now :)
Try out the Setup
Now, I was ready to blast off with .NET Core. I used the dotnet CLI to create a sample project by typing "dotnet new --type=console" and opened the project folder with VS Code. As soon as VS Code launched, it asked me to set the stage.
That got me a few new files under the .vscode folder.
I jumped into the debug pane, selected the correct option and hit the play button after putting a breakpoint inside the Program.cs file.
Boom! I am in business.
Now, I am moving on to the code changes, which I suppose will involve more hair pulling.
Deployment of software has been a constant challenge, possibly from the very start. It could be a web application, an HTTP service, a PlayStation app, or an application running inside a Raspberry Pi; all of them have the challenge of deploying new changes. It can even make you go down the route of a different architecture to make the deployments scale. One of the big challenges of software deployments is that there is no single generic rule or practice that you can apply so that it will all be shiny. This doesn't mean that we don't have generic practices or techniques around deployment. However, the way your software lives in its destination(s) and the way it's being consumed are factors that you need to take into consideration before coming up with a deployment strategy.
There is a little bit of a catch-22 here, as you sometimes shape the architecture around the deployment strategy (I am not saying this is a good or bad thing; I am not exactly sure yet). So, there is a fine balance you need to strike here.
So, what I am going to tell you here might not seem to fit everyone, but I believe it is the first step towards making your deployment strategy a viable one. I will throw it out here: your Git repository is your deployment boundary. This idea is subtle, easy to grasp and easy to reason from. What this really means is that all the things you have inside one repository will be part of only one deployment strategy. The main reason for this is versioning. Git allows you to tag your repositories, and you can use this feature to feed different deployment pipelines based on the type of the change (e.g. minor, patch or major). See my previous post on versioning builds and releases through SemVer for a bit more detail on this. However, it will get messy if you try to maintain multiple different versions inside the same repository for different components. You will easily lose track of how two components relate to each other over time.
Ask yourself this when you are structuring your repositories: do any of the components inside a repository have a different release cadence? If the answer is yes, think about why, and if the reasons are legitimate, give those components a new, separate home.
Let's start this post by setting the stage first and then move on to the problem. When a build is kicked off for your application/library/etc. on a CI (continuous integration) system like Travis CI or AppVeyor, you most probably flow a version number through that build, no matter what type of tech stack you use. This is mostly to relate the artifacts which the build will produce (e.g. Docker images, NuGet packages, .NET assemblies, etc.) to a particular context. This is really useful for communicating and correlating things. A few scenarios:
- Hey Mark, please take a look at foobar-1.2.3-rc.657 from our CI Docker registry. That image has the issue I mentioned; you can check it there.
- Oh, the barfoo-2.2.3-beta.362 NuGet package is missing a few assemblies that should have been there. Let's go back to the build logs for it and check what went wrong.
Convinced? Good :) Otherwise, you won't find the rest of the article useful.
The other case is flowing a version number when you actually want to produce a release for your defined environments (e.g. acceptance, staging, production). In this case, you usually don't want to give an arbitrary version to your artifacts, because the version carries high-level information about the changes. There are three important intentions you can signal here:
- I am releasing something which has no behavior changes
- I am releasing a new feature which doesn't break my existing consumers
- Dude, brace yourself! I will break the World into half!
You can see Semantic Versioning 2.0.0 for more information about this.
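As a rough sketch, those three intentions map onto SemVer's patch, minor and major parts. The helper below is my own illustration (not a real tool), and it only handles plain MAJOR.MINOR.PATCH versions:

```shell
#!/bin/sh
# Hypothetical helper: bump a plain MAJOR.MINOR.PATCH version according
# to the intent of the release. Illustrative only; real tools like
# node-semver also handle prerelease and build-metadata parts.
bump() {
  intent=$1
  major=${2%%.*}              # text before the first dot
  rest=${2#*.}                # text after the first dot
  minor=${rest%%.*}
  patch=${rest#*.}
  case "$intent" in
    patch) echo "$major.$minor.$((patch + 1))" ;;  # no behavior change
    minor) echo "$major.$((minor + 1)).0" ;;       # new, non-breaking feature
    major) echo "$((major + 1)).0.0" ;;            # breaking change
  esac
}

bump patch 1.4.2   # prints 1.4.3
bump minor 1.4.2   # prints 1.5.0
bump major 1.4.2   # prints 2.0.0
```

Note that minor and major bumps reset the lower parts to zero, which is exactly what the SemVer spec prescribes.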
So, what happens here is that we want to let the CI system decide on the version in some cases and take control over which version number to flow in other cases. Actually, the first statement is not quite correct, because you still want partial control over what version number flows for your non-release builds. Here is an example case to highlight what I mean:
- You started developing your application and shipped version 1.0.0.
- Your CI system started flowing prerelease versions based on 1.0.0 and also attached the build number to that version (e.g. 1.0.0-beta.54). Notice that this is wrong at this stage, because you already shipped v1.0.0. So, it should really be something like 1.0.1-beta.54.
- Now, you are shipping version 1.1.0 as you introduced a new feature.
- After that change, you keep building the software and the CI system keeps flowing versions based on 1.1.0. This is a bit bad, as you have now lost the correlation between chronological order and version order.
So, what we want here is to assign a version based on the latest release version, which means you want to have control over this process of assigning a version number. I have seen people keep a text file inside the repository to hold the latest release version, but that's a bit manual. I assume you kick off a release somewhere and you already assign a version at that stage. So, wouldn't it be nice to leverage that?
So, you have probably understood my problems here :) Now, let me introduce a few key pieces which will play a role in solving this problem, and then later I will move on to the actual implementation.
Tagging is a feature of Git which allows you to mark specific points in a repository's history. As the Git manual states, people typically use this functionality to mark release points. This is super convenient for our needs here and gets two important things sorted for us:
- A kick-off point for releases. Ultimately, release process will be kicked off when you tag a repository and push that tag to your remote.
- Deciding the base version based on the latest release version.
So, we have the tags. However, that doesn't mean every tag is a valid version; you can also use Git's tagging feature for other purposes. This is where SemVer comes into the picture: you can safely assume that any tag which is a valid SemVer string marks a release. This makes your life so much easier, as you can rely on tools like node-semver to help you out (as we will see shortly).
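To make the "not every tag is a version" point concrete, here is a tiny scratch-repository demo (assuming git is available on your machine; the tag names are made up for illustration):

```shell
#!/bin/sh
# Sketch: tags in a scratch repository. Some are SemVer release points,
# some are not, and both kinds happily live side by side.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

git tag 1.4.0            # a valid SemVer: treat it as a release point
git tag wip-experiment   # not a SemVer: a release pipeline should skip it

git tag -l               # lists both tags; a SemVer filter keeps only 1.4.0
```

Pushing such a tag to the remote (git push origin 1.4.0) is then the natural trigger for the release process.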
The other thing we have in the mix is being able to increment the build version after a release. For example, say we release version 2.5.6. The next build right after the release should have a version number bigger than 2.5.6. Seems easy, as you can just increment the patch version, right? Not quite: 2.5.6-beta is also a valid SemVer, and we can go further with 2.5.6-beta.5+736287, which is valid too. So, there is a pre-defined spec here, and we can again leverage tools like node-semver to work with this domain nicely.
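For illustration, here is a rough pure-shell imitation of what node-semver's prerelease increment does when the input is a plain release version (real node-semver handles prerelease and build-metadata inputs as well, so treat this as a sketch):

```shell
#!/bin/sh
# Sketch of node-semver's "-i prerelease" applied to a plain release
# version: bump the patch and start a new prerelease at 0, so that the
# result sorts between the release and the next patch release
# (2.5.6 < 2.5.7-0 < 2.5.7 under SemVer precedence rules).
next_prerelease() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  echo "$major.$minor.$((patch + 1))-0"
}

next_prerelease 2.5.6   # prints 2.5.7-0
```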
Solution and Bash Implementation
OK, all this information is super useful, but how do we make it work? Let me walk you through a solution I have introduced recently on a few of the projects I am working on. It's very trivial but very useful at the same time. However, keep in mind that there might be a few things I have missed, as I haven't been applying this for long. In fact, there might even be better techniques for this that you know of. If so, please comment here; I would love to hear them!
I want to explain this in two stages and bring them together at the end.
Deciding on a Base Version
When the build is kicked off, one of the first things to do is to decide a base version. This is fairly trivial and here is the flow chart to describe this decision making process:
Here is how the implementation looks in Bash:
#!/bin/bash
baseVersion=0.0.0-0
if semver "ignorethis" $(git tag -l) &>/dev/null
then
    baseVersion=$(semver $(semver $(git tag -l) | tail -n1) -i prerelease)
fi
Keep in mind that I am fairly new to Bash. So, there might be wrong/bad usages here.
To explain what happens here in a bit more detail:
- We get all the tags for the repository as a list by running git tag -l
- We pass this list to the semver command-line tool, which filters out invalid SemVer strings. Notice that there is another argument we pass to semver here, "ignorethis". It's just there to cover the case when there are no tags at all, so that the semver command-line tool still returns a non-zero exit code.
- If the semver command-line tool exits with 0, we know there is at least one tag which is a valid SemVer. So, we run tail -n1 on the semver output to retrieve the latest version, and we increment its prerelease identifier. This is now our base version.
- If there are no valid SemVer tags on the repository, we set 0.0.0-0 as the base version.
Deciding on a Build Version
Now that we have a base version, we need to decide on a build version based on it. This is a bit more involved but again very trivial to implement. Here is another flow chart to describe this decision making process:
And here is how the implementation looks in Bash (specific to Travis CI, as it uses Travis CI specific environment variables):
if [ -z "$TRAVIS_TAG" ]; then
    if [ -z "$TRAVIS_BRANCH" ]; then
        # can add the build metadata to indicate this is a pull request build
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    else
        # can add the build metadata to indicate this is a branch build
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    fi
else
    if ! semver $TRAVIS_TAG &>/dev/null
    then
        # can add the build metadata to indicate this is a tag build which is not a SemVer
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    else
        echo export PROJECT_BUILD_VERSION=$(semver $TRAVIS_TAG);
    fi
fi
Notice that I am echoing the export commands rather than executing them directly. This is because Travis CI doesn't flow exports that happen inside a script file (maybe it does, but I was not able to get it working). So, I am calling this script inside my .travis.yml file and evaluating its output like this: eval $(./scripts/set-build-version.sh)
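The pattern itself is easy to demonstrate in isolation; the function below is a made-up stand-in for the real set-build-version.sh script, and the version string is just an example:

```shell
#!/bin/sh
# A child process cannot modify its parent's environment. So the script
# only *prints* export statements, and the caller eval's that output to
# apply the exports in its own shell.
emit_build_version() {
  echo 'export PROJECT_BUILD_VERSION="1.2.1-beta.0.57"'
}

eval "$(emit_build_version)"
echo "$PROJECT_BUILD_VERSION"   # prints 1.2.1-beta.0.57
```

Running emit_build_version without the eval would only print the text; nothing would be exported.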
I am not going to separately explain how this works, as the flow chart (and the Bash script) is easy to grasp. However, one thing worth mentioning is the branch check. After we check whether the build is for a branch, we do the same operation either way. This is OK for my use case, but you could add special metadata to your version to indicate which branch the build happened on, or whether it was a pull request build.
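For example, marking a branch build with SemVer build metadata could look roughly like this (the +branch.… convention and all the variable values here are my own illustration, not something the script above does):

```shell
#!/bin/sh
# Sketch: append SemVer build metadata (dot-separated alphanumerics and
# hyphens after "+") to mark a branch build. Build metadata does not
# affect version precedence, so it is safe to use for bookkeeping.
baseVersion="1.2.1-beta.0"       # example base version
TRAVIS_BUILD_NUMBER=57           # example Travis CI build number
TRAVIS_BRANCH="feature-x"        # example branch name

PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER+branch.$TRAVIS_BRANCH"
echo "$PROJECT_BUILD_VERSION"    # prints 1.2.1-beta.0.57+branch.feature-x
```

One caveat to keep in mind: branch names may contain characters (like "/") that are not valid in SemVer build metadata, so a real script would need to sanitize them first.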
I find this solution very straightforward for picking the version of a build, and it gives a central way of kicking off the release process. I applied it to the AspNetCore.Identity.MongoDB project, a MongoDB data store adapter for ASP.NET Core Identity. There you can see how I am setting the build version, how I am using it and how I am kicking off a release process.
To bring everything together, here is the entire script to set the build version:
#!/bin/bash
baseVersion=0.0.0-0
if semver "ignorethis" $(git tag -l) &>/dev/null
then
    baseVersion=$(semver $(semver $(git tag -l) | tail -n1) -i prerelease)
fi

if [ -z "$TRAVIS_TAG" ]; then
    if [ -z "$TRAVIS_BRANCH" ]; then
        # can add the build metadata to indicate this is a pull request build
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    else
        # can add the build metadata to indicate this is a branch build
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    fi
else
    if ! semver $TRAVIS_TAG &>/dev/null
    then
        # can add the build metadata to indicate this is a tag build which is not a SemVer
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    else
        echo export PROJECT_BUILD_VERSION=$(semver $TRAVIS_TAG);
    fi
fi
I hope this will be useful to you in some way, and as I said, if you have a similar technique or practice that you apply for this case, please share it. Now, go and enjoy this spectacular weekend ;)