.NET Core Runtime RC2 was released a few days ago, along with .NET Core SDK Preview 1. At the same time as the .NET Core release, ASP.NET Core RC2 was also released. Today, I started the transition from RC1 to RC2 and I wanted to write about how I am getting each stage done. Hopefully, it will be somewhat useful to you as well. In this post, I want to talk about the tooling aspect of the transition.
Get the dotnet CLI Ready
One of the biggest shifts from RC1 to RC2 is the tooling. Before, we had DNVM, DNX and DNU as command line tools. All of them are now gone (RIP). Instead, we have one command line tool: the dotnet CLI. First, I installed the dotnet CLI on my Ubuntu 14.04 VM by running the following script as explained here:
```bash
sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet/ trusty main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver apt-mo.trafficmanager.net --recv-keys 417A0893
sudo apt-get update
sudo apt-get install dotnet-dev-1.0.0-preview1-002702
```
This installed the dotnet-dev-1.0.0-preview1-002702 package on my machine and I was good to go:
You can also use apt-cache to see all available versions:
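For instance, something along the following lines should do it (the exact package names may differ as the feed gets updated):

```bash
# List the dotnet packages available from the feed
apt-cache search dotnet-dev

# Show the installed and candidate versions of a specific package
apt-cache policy dotnet-dev-1.0.0-preview1-002702
```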
Just to make things clear, I deleted the ~/.dnx folder entirely to get rid of all the RC1 garbage.
Get Visual Studio Code Ready
At this stage, I had the C# extension installed on my VS Code instance on my Ubuntu VM, which was only working for DNX based projects. So, I opened up VS Code and updated my C# extension to the latest version (which is 1.0.11). After the upgrade, I opened up a project which was on RC1 and VS Code started downloading the .NET Core Debugger.
That was a good experience. I didn't dig into how it works, and I am not sure at this point why the debugger doesn't come inside the extension itself. There is probably a reason for that, but it's not my priority to dig into right now :)
Try out the Setup
Now, I am ready to blast off with .NET Core. I used the dotnet CLI to create a sample project by typing "dotnet new --type=console" and opened up the project folder with VS Code. As soon as VS Code launched, it asked me to set the stage.
That got me a few new files under the .vscode folder.
I jumped into the debug pane, selected the correct option and hit the play button after putting a breakpoint inside the Program.cs file.
Boom! I am in business.
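For reference, the end-to-end flow on the command line was roughly the following; the restore and run steps are the standard CLI workflow rather than something I showed above, and the exact flags may differ between preview builds:

```bash
dotnet new --type=console   # scaffold a sample console application
dotnet restore              # restore the NuGet dependencies
dotnet run                  # build and run the application
```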
Now, I am moving to code changes which will involve more hair pulling I suppose.
Deployment of software has been a constant challenge, possibly from the very start. It could be a web application, HTTP services, a PlayStation app, or an application running on a Raspberry Pi. All of them share the challenge of deploying new changes. It can even make you go down the route of a different architecture to make the deployments scale. One of the big challenges of software deployments is that there is not one generic rule or practice that you can apply and everything will be shiny. This doesn't mean that we don't have generic practices or techniques around it. However, the way your software lives in its destination(s) and the way it's being consumed are factors that you need to take into consideration before coming up with a deployment strategy.
There is a little bit of a catch-22 here, as you sometimes shape the architecture around the deployment strategy (not saying this is a good or bad thing, I am not exactly sure yet). So, there is a fine balance you need to strike here to set yourself up for success.
So, what I am going to tell you here might not seem to fit everyone, but I believe it is the first step to making your deployment strategy a viable one. I will throw it out here: your Git repository is your deployment boundary. This idea is subtle, easy to grasp and easy to reason from. What this really means is that everything you have inside one repository will be part of only one deployment strategy. The main reason for this is versioning. Git allows you to tag your repositories and you can use this feature to route changes into different deployment pipelines based on the type of the change (e.g. minor, patch or major). See my previous post on versioning builds and releases through SemVer for a bit more detail on this. However, it will get messy if you try to maintain multiple different versions inside the same repository for different components. You will easily lose track of how two components relate to each other over time.
Ask yourself this when you are structuring your repositories: do any of the components inside a repository have a different release cadence? If the answer is yes, try to think about why, and if the reasons are legitimate, give those components a new, separate home.
Let's start this post by setting the stage first and then move onto the problem. When a build is kicked off for your application/library/etc. on a CI (continuous integration) system like Travis CI or AppVeyor, you are most probably flowing a version number through that build no matter what type of tech stack you use. This is mostly to relate the artifacts which the build will produce (e.g. Docker images, NuGet packages, .NET assemblies, etc.) with a particular context. This is really useful for communicating and correlating stuff. A few scenarios:
- Hey Mark, please take a look at foobar-1.2.3-rc.657 from our CI Docker registry. That has the issue I have mentioned. You can check it on that image.
- Oh, the barfoo-2.2.3-beta.362 NuGet package is missing a few assemblies that should have been there. Let's go back to the build logs for it and check what went wrong.
Convinced? Good :) Otherwise, you won't find the rest of the article useful.
The other case is flowing a version number when you actually want to produce a release for your defined environments (e.g. acceptance, staging, production). In this case, you usually don't want to give an arbitrary version to your artifacts because the version will carry the high level information about the changes. There are three important intentions you can signal here:
- I am releasing something which has no behavior changes
- I am releasing a new feature which doesn't break my existing consumers
- Dude, brace yourself! I will break the World into half!
You can see Semantic Versioning 2.0.0 for more information about this.
So, what happens here is that we want to let the CI system decide on the version in some cases and take control over which version number to flow in other cases. Actually, the first statement is not quite correct because you still want to have partial control over what version number to flow for your non-release builds. Here is an example case to highlight what I mean:
- You started developing your application and shipped version 1.0.0.
- Your CI system started flowing prerelease versions based on 1.0.0 and also attached the build number to that version (e.g. 1.0.0-beta.54). Notice that this is wrong at this stage because you already shipped v1.0.0. So, it should really be something like 1.0.1-beta.54.
- Now, you are shipping version 1.1.0 as you introduced a new feature.
- After that change, you keep building the software and the CI system keeps flowing 1.1.0-based versions. This is a bit bad, as you now lose the correlation between chronological order and version order.
So, what we want here is to assign a version based on the latest release version, which means that you want to have control over this process of assigning a version number. I have seen people keep a text file inside the repository to hold the latest release version, but that's a bit manual. I assume you kick off a release somewhere and you already assign a version at that stage for releases. So, wouldn't it be good to leverage that?
You probably understand my problem here now :) Let me introduce a few key pieces which will play a role in solving this problem, and later I will move onto the actual implementation.
Tagging is a feature of Git which allows you to mark specific points in a repository's history. As the Git manual also states, people typically use this functionality to mark release points. This is super convenient for our needs here and gets two important things sorted for us:
- A kick-off point for releases. Ultimately, the release process will be kicked off when you tag a repository and push that tag to your remote (see the example after this list).
- Deciding the base version based on the latest release version.
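For example, kicking off a release could be as simple as the following (the version used for the tag is just illustrative):

```bash
# Tag the current commit with the release version and push the tag to the remote;
# the CI system then sees a tag build and starts the release pipeline.
git tag 2.5.6
git push origin 2.5.6
```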
So, we have the tags. However, this doesn't mean that every tag is a valid version; you can also use Git's tagging feature for other purposes. This is where SemVer comes into the picture: you can safely assume that any tag which is a valid SemVer is for a release. This makes your life so much easier, as you can rely on existing tools like node-semver to help you out (as we will see shortly).
The other thing we have in the mix is being able to increment the build version after a release. For example, we release version 2.5.6. The next build right after the release should have a version number bigger than 2.5.6. Seems easy, you can just increment the patch version, right? No! The release could just as well have been 2.5.6-beta, which is also a valid SemVer, and we can go further with 2.5.6-beta.5+736287, which is also valid. So, there is a pre-defined spec here and we can again leverage tools like node-semver to work within this domain nicely.
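As a quick illustration, the node-semver command-line tool can do this increment for us; assuming it behaves the way I describe, the expected output is shown in the comments:

```bash
semver 2.5.6 -i prerelease        # -> 2.5.7-0 (bumps the patch, then starts a prerelease)
semver 2.5.6-beta.5 -i prerelease # -> 2.5.6-beta.6 (bumps only the prerelease identifier)
```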
Solution and Bash Implementation
OK, all this information is super useful, but how do we make it work? Let me walk you through a solution I have introduced recently on a few of the projects I am working on. It's fairly trivial but very useful at the same time. However, keep in mind that there might be a few things I have missed, as I haven't been applying this for long. In fact, there might even be better techniques for this that you know of. If so, please comment here. I would love to hear them!
I want to explain this in two stages and bring them together at the end.
Deciding on a Base Version
When the build is kicked off, one of the first things to do is to decide a base version. This is fairly trivial and here is the flow chart to describe this decision making process:
Here is what the implementation looks like in Bash:
```bash
#!/bin/bash
# Default base version when the repository has no valid SemVer tags yet.
baseVersion=0.0.0-0
# "ignorethis" guarantees at least one argument, so semver exits non-zero when there are no tags.
if semver "ignorethis" $(git tag -l) &>/dev/null
then
    # Take the latest valid SemVer tag (semver sorts ascending) and bump its prerelease identifier.
    baseVersion=$(semver $(semver $(git tag -l) | tail -n1) -i prerelease)
fi
```
Keep in mind that I am fairly new to Bash. So, there might be wrong/bad usages here.
To explain what happens here in a bit more detail (a concrete example follows the list):
- We get all the tags for the repository as a list by running git tag -l
- We pass this list to the semver command-line tool to filter out the invalid SemVer strings. Notice that there is another parameter we pass to semver here called "ignorethis". It's just there to cover the case where there are no tags at all, so that the semver command-line tool still gets an argument and returns a non-zero exit code.
- If the semver command-line tool exits with 0, we know that there is at least one tag which is a valid SemVer. So, we run tail -n1 on the semver output to retrieve the latest version and increment its prerelease identifier. This is now our base version.
- If there are no valid SemVer tags on the repository, we set 0.0.0-0 as the base version.
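To make this concrete, suppose the repository has the tags 1.2.3, 2.0.0 and some-other-tag (all hypothetical). The script would then effectively run something like this:

```bash
semver "ignorethis" 1.2.3 2.0.0 some-other-tag  # prints 1.2.3 and 2.0.0, exits with 0
semver 1.2.3 2.0.0 some-other-tag | tail -n1    # -> 2.0.0, the latest valid SemVer tag
semver 2.0.0 -i prerelease                      # -> 2.0.1-0, which becomes the base version
```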
Deciding on a Build Version
Now that we have a base version, we need to decide on a build version based on it. This is a bit more involved but again, fairly straightforward to implement. Here is another flow chart to describe this decision making process:
And here is what the implementation looks like in Bash (specific to Travis CI as it uses Travis CI specific environment variables):
```bash
if [ -z "$TRAVIS_TAG" ]; then
    if [ -z "$TRAVIS_BRANCH" ]; then
        # can add the build metadata to indicate this is a pull request build
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    else
        # can add the build metadata to indicate this is a branch build
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    fi
else
    if ! semver $TRAVIS_TAG &>/dev/null
    then
        # can add the build metadata to indicate this is a tag build which is not a SemVer
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    else
        echo export PROJECT_BUILD_VERSION=$(semver $TRAVIS_TAG);
    fi
fi
```
Notice that I am echoing the commands rather than directly calling them. This is because Travis CI doesn't flow the exports which happen inside a script file. Maybe it does, but I was not able to get it working. Anyway, I am calling this script inside my .travis.yml file by evaluating its output like this: eval $(./scripts/set-build-version.sh)
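In other words, the script writes export statements to standard output and eval executes them in the calling shell, so the variable survives the script. A hypothetical run (the values are made up) would look like this:

```bash
$ ./scripts/set-build-version.sh
export PROJECT_BUILD_VERSION=2.0.1-0.112

$ eval $(./scripts/set-build-version.sh)
$ echo $PROJECT_BUILD_VERSION
2.0.1-0.112
```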
I am not going to explain how this works step by step, as the flow chart (and the Bash script) is easy to grasp. However, one thing worth mentioning is the branch check. After we check whether the build is for a branch or a pull request, we do the same operation in both cases. This is OK for my use case, but you could add special metadata to your version in order to indicate which branch the build happened on or whether it was a pull request.
I find this solution very straightforward for picking the version of the build and having a central way of kicking off a release process. I applied this on the AspNetCore.Identity.MongoDB project, a MongoDB data store adapter for ASP.NET Core Identity. You can also see how I am setting the build version, how I am using it and how I am kicking off a release process.
To bring everything together, here is the entire script to set the build version:
```bash
#!/bin/bash
# Default base version when the repository has no valid SemVer tags yet.
baseVersion=0.0.0-0
# "ignorethis" guarantees at least one argument, so semver exits non-zero when there are no tags.
if semver "ignorethis" $(git tag -l) &>/dev/null
then
    # Take the latest valid SemVer tag (semver sorts ascending) and bump its prerelease identifier.
    baseVersion=$(semver $(semver $(git tag -l) | tail -n1) -i prerelease)
fi

if [ -z "$TRAVIS_TAG" ]; then
    if [ -z "$TRAVIS_BRANCH" ]; then
        # can add the build metadata to indicate this is a pull request build
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    else
        # can add the build metadata to indicate this is a branch build
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    fi
else
    if ! semver $TRAVIS_TAG &>/dev/null
    then
        # can add the build metadata to indicate this is a tag build which is not a SemVer
        echo export PROJECT_BUILD_VERSION="$baseVersion.$TRAVIS_BUILD_NUMBER";
    else
        echo export PROJECT_BUILD_VERSION=$(semver $TRAVIS_TAG);
    fi
fi
```
I hope this will be useful to you in some way and as said, if you have a similar technique or a practice that you apply for this case, please share it. Now, go and enjoy this spectacular weekend ;)
This is yet another post from me on a tiny but very useful tool. Last time, I wrote about the SqlLocalDB.exe utility, a command-line tool to manage SQL Server Express LocalDB instances. Today, I want to tell you about node-semver, a SemVer 2.0.0 parser which is both a command-line tool and a node.js library (the one that npm uses). I am a huge fan of SemVer. You cannot imagine how many places you can use semantic versioning to make the internal and external communication around your software product easier. I plan to write more about a few different cases I have figured out.
Going back to our topic, getting hold of this tool is amazingly simple. You basically install it as a global npm package by running "npm install semver -g" and from there, you have access to the semver command everywhere.
There are a lot of great things you can do with this, but I will give you a few basic and useful samples. The first useful thing I noticed was the ability to validate a string against the SemVer 2.0.0 spec.
If the validation fails, it exits with a non-zero exit code.
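A minimal illustration of what this looks like (the versions are arbitrary; expected behavior shown as comments):

```bash
semver 1.4.0-beta.5   # valid: prints the version back and exits with 0
semver 1.4.0.7        # not a valid SemVer: prints nothing and exits with a non-zero code
```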
You can also increment a version:
By default, it increments the patch version but you can customize this if you want:
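Roughly, the behavior is as follows (the versions are arbitrary and the expected output is shown as comments):

```bash
semver 1.4.0 -i              # -> 1.4.1 (patch is the default increment level)
semver 1.4.0 -i minor        # -> 1.5.0
semver 1.4.0 -i major        # -> 2.0.0
semver 1.4.0 -i prerelease   # -> 1.4.1-0
```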
One useful feature that I have been using is giving a bunch of string values to node-semver and getting back only the valid semantic version strings:
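For example, something like this (the input values are made up; the output is shown as comments):

```bash
semver 2.0.0 not-a-version 1.4.0 1.10.1
# 1.4.0
# 1.10.1
# 2.0.0
```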
Notice that it doesn't only sanitize the input, it also orders the versions on the standard output which is very convenient. I have an upcoming blog post where I will show you how I leverage this feature and a few others for a legitimate use case.
Lastly, I want to give you an example of its range feature, which allows you to check whether a version (or which versions, if multiple are given) satisfies a range:
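Here is a sketch of how that looks (the range and the versions are arbitrary; matching versions are printed and the command exits with a non-zero code if nothing matches):

```bash
semver -r ">=1.2.0 <2.0.0" 1.1.0 1.4.5 2.1.0
# 1.4.5
```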
So, did you like it? I hear you screaming "YES!" :) Go spread this amazing tiny utility by sharing this blog post. There is more to it, but these examples should be enough to give you an idea.
Two weeks ago, I had an amazing opportunity to be at the Microsoft Build conference in San Francisco as an attendee, thanks to my amazing company Redgate. The experience was truly unique and the number of people I met there was huge. A bit late, but I would like to share my experience of the conference with you in this post by highlighting what happened and giving you my personal takeaways. You can also check out my tweets for the Build conference.
There were a bunch of big and small announcements from Microsoft throughout the conference. Some of these were highlighted during the two keynotes and other announcements were spread across the three days. I tried to capture all of them here, but it's very likely I missed some (mostly the small ones):
- Running Bash on Ubuntu on Windows has been announced. This is not something like Cygwin; it basically runs Ubuntu user-mode binaries on Windows without a Linux kernel. So, the things we all love, like apt-get, work as you expect them to. This is a new developer feature included in the Windows 10 "Anniversary" update which is coming soon (as they say). You can watch the Running Bash on Ubuntu on Windows video, check the Windows Command Line Improvements session from Build 2016 or Scott Hanselman's Channel 9 interview on Linux Command Line on Windows.
- Xamarin in Visual Studio is now available at no extra cost. Xamarin will be in every edition of Visual Studio, including the widely-available Visual Studio Community Edition, which is free for individual developers, open source projects, academic research, education, and small professional teams. You can also read Mobile App Development made easy with Visual Studio and Xamarin post up on VS blog.
- There is now an iPhone simulator on Windows which comes with Xamarin. This still requires a Mac to build the project AFAIK.
- The Mono Project is now part of the .NET Foundation and is relicensed under the MIT License.
- Visual Studio 2015 Update 2 RTM has been made available to public. You can read more about this on VS Blog.
- Visual Studio "15" Preview has been made available to the public. This is different from VS 2015 and has a very straightforward install experience. I recommend watching The Future of Visual Studio session or The Future of Visual Studio Channel 9 live interview from Build 2016, or reading the Visual Studio "15" Preview Now Available blog post if you want to learn more.
- Related to both VS 2015 Update 2 and VS "15" Preview, see the What’s New for C# and VB in Visual Studio post on the .NET Blog. You will see mostly features that ReSharper has had for a long time.
- Microsoft announced its Bot Framework.
- The HoloLens SDK and emulator is now live.
- Cognitive Services has been announced. This service exposes several machine learning algorithms as a service and allows you to hook them into your apps through several HTTP based APIs like the Computer Vision API. Some portions of this service were previously known under the codename Project Oxford. Try out https://www.captionbot.ai/ which is built on top of these APIs.
- Azure Service Fabric has stepped out of preview and is now GA. Microsoft gave a lot of attention to this service throughout the conference, including the keynote (starting from 0:52:42). You can also check out the Azure Service Fabric for Developers and Building MicroServices with Service Fabric sessions from Build 2016 for more in-depth info on this.
- Virtual Machine Scale Sets is now GA.
- Azure released a new service called Azure Functions. Azure Functions introduces an event driven, compute-on-demand experience that builds on Azure’s market leading PaaS platform. This is Azure's offering which is very similar to AWS Lambda. You can read Introducing Azure Functions blog post and Introducing Azure Functions session from Build 2016 for more info.
- Several announcements have been made for DocumentDB, Microsoft’s hosted NoSQL offering on Azure.
- Protocol support for MongoDB: Applications can now easily and transparently communicate with DocumentDB using existing, community supported Apache MongoDB APIs and drivers. The DocumentDB protocol support for MongoDB is now in preview. This is a very strategic move to make up for the lack of a standalone local installation option, which is the most wanted feature for DocumentDB.
- Global databases: DocumentDB global databases are now available in preview, visit the sign-up page for more information.
- Updated pricing and scale options.
- Azure IoT Starter Kits have been released. There was a lot of other IoT-related news as well; you can see the official post on the Azure Blog for the details.
- Power BI Embedded has been made available as a preview service.
- Announcing the publication of Parse Server with Azure Managed Services
- Announcing the .NET Framework 4.6.2 Preview
- Windows 10 Anniversary SDK is bringing exciting opportunities to developers
Here is the list of sessions I have attended:
- First Day Keynote
- The Future of Visual Studio: It was a great session to watch even with the demo failures, which were understandable considering the state of the product. Debugging on Ubuntu through VS "15" Preview was my favorite part of the session.
- What's New in TypeScript?: It's always interesting and useful to watch Anders talking about languages. In this session, the part where he talks about non-nullable types in TypeScript was really beneficial.
- Building Resilient Services: Learning Lessons from Azure with Mark Russinovich: This is like a tradition for Build. Mark Russinovich talked about architectural and design decisions on the Azure platform, mostly driven by real use cases.
- Second Day Keynote
- .NET Overview: Very valuable session to understand the future of .NET.
- Introducing ASP.NET Core 1.0: Very basic intro on ASP.NET Core 1.0. It is useful to watch if you have no idea what is coming up on ASP.NET Core 1.0. Check the docs, too.
- Enhancing Your Application with Machine Learning Through APIs
- Visual Studio 2015 Extensibility: Mads Kristensen has built a VS extension, properly open sourced it and published it on the stage: https://github.com/madskristensen/FileDiffer
- Azure Data Lake and Azure Data Warehouse: Applying Modern Practices to Your App: This was a really vague, hard-to-follow session, but I plan to watch it again. I find the messaging around Data Lake a bit confusing.
- Deploying ASP.NET Core Applications: Enjoyed this a lot, Dan laid out several cases in terms of deployment of ASP.NET Core applications.
As much as I wanted to attend some other sessions, I missed them mostly due to clashes with other sessions. Luckily, recordings of all Build 2016 sessions are available up on Channel 9. Here is my list of sessions to catch up on:
- The Future of C#: This was really entertaining and informative to watch and see what the future of C# might bring.
- Building Apps in Azure Using Windows and Linux VMs: Really good overall picture on the current state of Azure IaaS
- Cross-Platform Mobile with Xamarin
- Delivering Applications at Scale with DocumentDB, Azure's NoSQL Document Database
- Microsoft Cognitive Services: Build smarter and more engaging experiences
- Windows 10 IoT Core: From Maker to Market
- Setting the Stage: The Application Platform in Windows Server 2016
- Introducing Azure Functions
- ASP.NET Core Deep Dive into MVC
- Advanced Analytics with R and SQL
- Using the Right Networking API for Your UWP App
- SQL Database Technologies for Developers
- Building Applications Using the Azure Container Service
There were also many good Channel 9 Live interviews. You can find them here. Here is a personal list of a few which are worth listening to:
- Anders Live
- Jeffrey Snover on Azure Stack, Powershell, and Nano Server
- The Future of .NET Languages
- Linux Command Line on Windows
- The Future of Visual Studio
- Moving Forward with ASP.NET
- Docker Tooling for Visual Studio and ASP.NET Core
- Xamarin Live
- Mark Russinovich Live
- Data Science on Azure
- International Space Station Project
All in all, it was a great conference and, as stated, I am still catching up on the stuff that I missed. Throughout the conference, I picked up a few key points and I want to end the post with those:
- I have seen more effort from Microsoft to make developers' lives easier and more productive by enabling new tools (Bash on Ubuntu on Windows), supporting multiple platforms (Service Fabric running on every environment including AWS, on-premises, Azure Stack, and a preview of Service Fabric on Linux), open sourcing more (some parts of Xamarin have gone open source) and making existing paid tools available for free (Xamarin is now free).
- Microsoft is more focused on bringing its existing services together and trying to provide a cohesive ecosystem for developers. Service Fabric, Cognitive Services and Data Lake are a few examples of this.
- .NET Core and CoreCLR are approaching finalization for v1. After RC2, I don't expect many more features to be added or concepts to change.
- I think this is the first time I have seen the client apps story from Microsoft stabilize. Universal Windows Platform (UWP) was the focus in this area this year, just as it was the previous year.
- I am absolutely happy to see Microsoft abandoning Windows Phone day by day. There were no sessions directly about it during the conference.
- There were more steps towards making software that manages people's lives in a better way. The Skype Bot Framework was one of these steps.
- Microsoft (mostly the Azure group) is investing heavily in IoT solutions. Azure Functions and the new updates to the Azure IoT Suite are just a few signs of this.
- Azure Resource Manager (ARM) and ARM templates are getting a lot of love from Microsoft and it's the direction they are pushing forward; they are even building new Azure services on top of this.