Sorted By: Tag (c-sharp)

First Hours with Visual Studio Code on Mac and Windows

Today is one of those awesome days if you build stuff on the .NET platform. Microsoft announced a bunch of things during the Build 2015 keynote, and one of them is Visual Studio Code, a free and stripped-down version of Visual Studio which works on Mac OS X, Linux and Windows. Let me give you my highlights in this short blog post :)
2015-04-29 21:15
Tugberk Ugurlu


Today is one of those awesome days if you are building stuff on the .NET platform. Microsoft announced a bunch of things at the Build 2015 keynote a few hours ago, and one of them is Visual Studio Code, a free and stripped-down version of Visual Studio which works on Mac OS X, Linux and Windows. It leverages a bunch of existing open source software such as OmniSharp and Electron. Most of all, this was my #bldwin wish :)

First of all, you should definitely install Visual Studio Code and start checking the documentation, which is very extensive. I followed those steps, and as I am very excited about this new tool, I wanted to share my experience thus far, which is not much but very promising.

The first thing I noticed was the top-notch support for ASP.NET 5. The documentation for ASP.NET 5 support is pretty good, but some features are not highlighted there. For example, you get IntelliSense for dependencies:

Screenshot 2015-04-29 19.26.46

When you add a dependency, you get a nice notification telling you that you should restore:

Screenshot 2015-04-29 19.59.55

Pretty nice! So, how would you restore? Hit ⇧⌘P to bring up the command palette and you can see the restore command there:

Screenshot 2015-04-29 21.45.20

It will run the restore inside the terminal:

Screenshot 2015-04-29 21.48.30

You can also invoke the commands defined inside your project.json:

Screenshot 2015-04-29 20.09.09

Screenshot 2015-04-29 20.10.14
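As a reference for what such commands look like: in that era of ASP.NET 5, they lived under a "commands" section of project.json. Here is a hypothetical example (the package name, version and URL are illustrative, not taken from this post):

```json
{
	"dependencies": {
		"Microsoft.AspNet.Hosting": "1.0.0-beta4"
	},
	"commands": {
		"web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://localhost:5000"
	}
}
```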

Obviously, you can change the theme.

Screenshot 2015-04-29 20.18.35

Writing C# code is also very slick! You currently don’t have all the nice refactoring features of the full-fledged Visual Studio, but it’s still impressive:

Screenshot 2015-04-29 20.19.57

Screenshot 2015-04-29 20.22.43

We even have some advanced stuff like Peek Definition:

Screenshot 2015-04-29 20.02.29

Check out the documentation for all of the code editing features.

As you might guess, Windows is also fully supported :)

Screenshot 2015-04-29 20.35.16

Screenshot 2015-04-29 21.18.29

Screenshot 2015-04-29 21.23.36

I want to touch on the Git integration as well. I generally use Git Bash and this won’t change for me, but having the diff view inside the editor in such a nice way is priceless!

image

How about old/current .NET applications? I managed to get one up and running easily, and got the build working by defining a task for it:

{
	"version": "0.1.0",

	// The command is msbuild.
	"command": "msbuild",

	// Show the output window only if unrecognized errors occur.
	"showOutput": "silent",

	// Under Windows, use the full path to msbuild.exe so we don't need a shell.
	"windows": {
		"command": "C:\\Program Files (x86)\\MSBuild\\14.0\\Bin\\msbuild.exe"
	},

	// Arguments to pass to msbuild.
	"args": []
}

image

I was expecting this to work without any further configuration, but it could just be me not being able to get it working.

As I said, it’s very early days, but I am sold on this editor! Also, this is a fantastic time to build products on the .NET platform. I would like to thank all the people at Microsoft and in the open source community who are making our lives easier and more enjoyable. I will leave you all now and go enjoy my new toy! :O

Compiling C# Code Into Memory and Executing It with Roslyn

Let me show you how to compile a piece of C# code into memory and execute it with Roslyn. It is super easy, believe it or not :)
2015-03-31 20:39
Tugberk Ugurlu


For the last couple of days, I have been looking into how to get the Razor view engine running outside ASP.NET 5 MVC. It was fairly straightforward, but there are a few bits and pieces that you need to stitch together, which can be challenging. I will get to the Razor part in a later post; in this post, I would like to show how to compile a piece of C# code into memory and execute it with Roslyn, which was one of the parts of getting Razor to work outside ASP.NET MVC.

The first thing to do is install the C# code analysis library into your project through NuGet. In other words, install Roslyn :)

Install-Package Microsoft.CodeAnalysis.CSharp -pre

This will pull down a bunch of packages like Microsoft.CodeAnalysis.Analyzers, System.Collections.Immutable, etc. as its dependencies, which is OK. In order to compile the code, we first want to create a SyntaxTree instance. We can do this pretty easily by parsing the code block using the CSharpSyntaxTree.ParseText static method.

SyntaxTree syntaxTree = CSharpSyntaxTree.ParseText(@"
    using System;

    namespace RoslynCompileSample
    {
        public class Writer
        {
            public void Write(string message)
            {
                Console.WriteLine(message);
            }
        }
    }");

The next step is to create a Compilation object. In case you are wondering, a compilation object is an immutable representation of a single invocation of the compiler (code comments to the rescue). It is the actual bit which carries the information about syntax trees, referenced assemblies and other important stuff which you would usually give to the compiler. We can create an instance of a Compilation object through another static method: CSharpCompilation.Create.

string assemblyName = Path.GetRandomFileName();
MetadataReference[] references = new MetadataReference[]
{
    MetadataReference.CreateFromFile(typeof(object).Assembly.Location),
    MetadataReference.CreateFromFile(typeof(Enumerable).Assembly.Location)
};

CSharpCompilation compilation = CSharpCompilation.Create(
    assemblyName,
    syntaxTrees: new[] { syntaxTree },
    references: references,
    options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

The hard part is now done. The final bit is actually running the compilation and getting the output (in our case, a dynamically linked library). To run the actual compilation, we will use the Emit method on the Compilation object. There are a few overloads of this method, but we will use the one where we can pass in a Stream object and make the Emit method write the assembly bytes into it. The Emit method will give us an instance of an EmitResult object, and we can pull the status of the compilation, warnings, failures, etc. from it. Here is the actual code:

using (var ms = new MemoryStream())
{
    EmitResult result = compilation.Emit(ms);

    if (!result.Success)
    {
        IEnumerable<Diagnostic> failures = result.Diagnostics.Where(diagnostic => 
            diagnostic.IsWarningAsError || 
            diagnostic.Severity == DiagnosticSeverity.Error);

        foreach (Diagnostic diagnostic in failures)
        {
            Console.Error.WriteLine("{0}: {1}", diagnostic.Id, diagnostic.GetMessage());
        }
    }
    else
    {
        ms.Seek(0, SeekOrigin.Begin);
        Assembly assembly = Assembly.Load(ms.ToArray());
    }
}

As mentioned before, we are getting an EmitResult out and checking its status. If it’s not a success, we get the errors out and output them. If it’s a success, we load the bytes into an Assembly object. The Assembly object you have here is no different from the ones you are used to. From this point on, it’s all up to your ninja reflection skills to execute the compiled code. For the purposes of this demo, it was as easy as the code below:

Type type = assembly.GetType("RoslynCompileSample.Writer");
object obj = Activator.CreateInstance(type);
type.InvokeMember("Write",
    BindingFlags.Default | BindingFlags.InvokeMethod,
    null,
    obj,
    new object[] { "Hello World" });

This was in a console application and after running the whole thing, I got the expected result:

image

Pretty sweet and easy! This sample is up on GitHub if you are interested.

Dependency Injection: Inject Your Dependencies, Period!

Reasons why I prefer dependency injection over static accessors.
2014-11-18 12:39
Tugberk Ugurlu


There is a discussion going on inside the ASP.NET/Logging repository about whether we should have a static Logger available everywhere or not. I am quite against it and I stated why in a few comments there, but hopefully with this post I can address why I think the dependency injection path is better for most cases.

Let’s take an example and explain it further based on that.

public class FooManager : IFooManager
{
    private readonly IClock _clock;
    private readonly ILogger _logger;
    private readonly IEnvironmentInfo _environmentInfo;

    public FooManager(IClock clock, ILogger logger, IEnvironmentInfo environmentInfo)
    {
        _clock = clock;
        _logger = logger;
        _environmentInfo = environmentInfo;
    }

    // ...
}

This is a simple class whose job is not important in our context. The way I see this class inside Visual Studio is as below:

image

In any other text editor:

image

Inside Visual Studio when I am writing tests:

Screenshot 2014-11-18 13.07.38

There are two fundamental reasons why I like this approach.

The Requirements of the Component Are Exposed Clearly

When I look at this class inside any editor, I can say that it cannot function properly without IClock, ILogger and IEnvironmentInfo implementation instances, as there is no other constructor (preferably, I would also do null checks on the constructor parameters, but I skipped that to keep the example clean). Instead of the above implementation, imagine that I have the following one:

public class FooManager : IFooManager
{
    public void Run()
    {
        if(DateTime.UtcNow > EnvironmentInfo.Instance.LicenceExpiresInUtc)
        {
            Logger.Instance.Log("Cannot run as licence has expired.");
        }

        // Do other stuff here...
    }
}

With this approach, we are relying on static instances of the components (which are possibly thread-safe singletons). What is wrong with that? I’m not sure what the general argument for it is, but here are my reasons against it.

The first thing I do when I open up a C# code file inside Visual Studio is press CTRL+M, O to get an idea about the component. It has become kind of a habit for me. Here is how it looks when I do that for this class:

image

I have no idea what this class needs to function properly. I also have no idea what type of environmental context it relies on. Please note that this issue is not that big of a problem for a class this simple, but I imagine that your component will have a few other methods, possibly other interface implementations, private fields, etc.

I Don’t Need to Look at the Implementation to See What It is Using

When I try to construct the class inside a test method in Visual Studio, I can very easily see what I need to supply to make it function the way I want. If you are using another editor which doesn’t have tooling support to show you the constructor parameters, you are still safe, as you would get compile errors if you didn’t supply the required parameters. With the static instance approach, however, you are on your own. There are no constructor parameters to tell you what the class needs; it relies on static instances which are hard and dirty to mock. For me, it’s horrible to see C# code written like that.
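To make this concrete, here is a hand-rolled sketch of such a test, with no mocking library needed. The interface members are my guesses (the post doesn’t show them), so treat IClock.UtcNow and IEnvironmentInfo.LicenceExpiresInUtc as illustrative assumptions:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical shapes for the interfaces from the post.
public interface IClock { DateTime UtcNow { get; } }
public interface ILogger { void Log(string message); }
public interface IEnvironmentInfo { DateTime LicenceExpiresInUtc { get; } }

public class FooManager
{
    private readonly IClock _clock;
    private readonly ILogger _logger;
    private readonly IEnvironmentInfo _environmentInfo;

    public FooManager(IClock clock, ILogger logger, IEnvironmentInfo environmentInfo)
    {
        _clock = clock;
        _logger = logger;
        _environmentInfo = environmentInfo;
    }

    public void Run()
    {
        if (_clock.UtcNow > _environmentInfo.LicenceExpiresInUtc)
        {
            _logger.Log("Cannot run as licence has expired.");
        }
    }
}

// Hand-rolled fakes: trivial classes, because the dependencies arrive through the constructor.
class FixedClock : IClock
{
    public DateTime UtcNow { get; set; }
}

class ListLogger : ILogger
{
    public List<string> Messages { get; } = new List<string>();
    public void Log(string message) => Messages.Add(message);
}

class FakeEnvironmentInfo : IEnvironmentInfo
{
    public DateTime LicenceExpiresInUtc { get; set; }
}

class Program
{
    static void Main()
    {
        var logger = new ListLogger();
        var manager = new FooManager(
            new FixedClock { UtcNow = new DateTime(2015, 1, 1) },
            logger,
            new FakeEnvironmentInfo { LicenceExpiresInUtc = new DateTime(2014, 1, 1) });

        manager.Run();

        // The licence expired before "now", so exactly one message was logged.
        Console.WriteLine(logger.Messages.Count); // 1
    }
}
```

Nothing static needs to be patched here; every collaborator is swapped in through the constructor.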

When you try to write a test against this class now, here is what it looks like if you are inside Visual Studio:

Screenshot 2014-11-18 14.17.13

You have no idea what it needs. Hit F12, try to figure out what it needs by looking at the implementation, and good luck mocking the static read-only members and DateTime.Now :)

I’m intentionally skipping why you shouldn’t use DateTime.Now directly inside your library (or even the whole DateTime API). The reasons vary depending on the context, and there are several good further readings on this subject.

It is Still Bad Even If It is not Static

Yes, it’s still bad. Let me give you a hint: the Service Locator pattern.

Should I await on Task.FromResult Method Calls?

The Task class has a static method called FromResult which returns an already-completed Task object (in the RanToCompletion status). I have seen a few developers "await"ing on Task.FromResult method calls, and this clearly indicates that there is a misunderstanding here. I'm hoping to clear the air a bit with this post.
2014-02-24 21:14
Tugberk Ugurlu


The Task class has a static method called FromResult which returns an already-completed Task object (in the RanToCompletion status). I have seen a few developers "await"ing on Task.FromResult method calls, and this clearly indicates that there is a misunderstanding here. I'm hoping to clear the air a bit with this post.

What is the use of Task.FromResult method?

Imagine a situation where you are implementing an interface which has the following signature:

public interface IFileManager
{
     Task<IEnumerable<File>> GetFilesAsync();
}

Notice that the method returns a Task, which allows the return expression to represent an ongoing operation and also allows the consumer of this method to call it in an asynchronous manner without blocking (of course, if the underlying layer supports it). However, depending on the case, your operation may not be asynchronous. For example, you may just have the files inside an in-memory collection and want to return them from there, or you may perform an I/O operation to retrieve the file list asynchronously from a particular data store the first time and cache the results, so that you can just return them from the in-memory cache for subsequent calls. These are just some scenarios where you need to return a successfully completed Task object. Here is how you can achieve that without the help of the Task.FromResult method:

public class InMemoryFileManager : IFileManager
{
    IEnumerable<File> files = new List<File>
    {
        //...
    };

    public Task<IEnumerable<File>> GetFilesAsync()
    {
        var tcs = new TaskCompletionSource<IEnumerable<File>>();
        tcs.SetResult(files);

        return tcs.Task;
    }
}

Here, we used TaskCompletionSource to produce a successfully completed Task object carrying the result. Therefore, the caller of the method will immediately have the result. This is what we had been doing until the introduction of .NET 4.5. If you are on .NET 4.5 or above, you can just use Task.FromResult to perform the same operation:

public class InMemoryFileManager : IFileManager
{
    IEnumerable<File> files = new List<File>
    {
        //...
    };

    public Task<IEnumerable<File>> GetFilesAsync()
    {
        return Task.FromResult<IEnumerable<File>>(files);
    }
}
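A quick sanity check of what Task.FromResult hands back; this is a minimal console sketch of mine, not code from the original post:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Task.FromResult produces a task that is already completed,
        // so the caller can read the result without any waiting.
        Task<int> task = Task.FromResult(42);

        Console.WriteLine(task.IsCompleted); // True
        Console.WriteLine(task.Status);      // RanToCompletion
        Console.WriteLine(task.Result);      // 42; does not block, the task is already done
    }
}
```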

Should I await Task.FromResult method calls?

TL;DR version of the answer: absolutely not! If you find yourself needing to use Task.FromResult, it's clear that you are not performing any asynchronous operation, so just return the Task.FromResult output directly. Is it dangerous to await it? Not exactly, but it's illogical and has a performance cost.

The long version of the answer is a bit more in-depth. Let's first see what happens when you "await" a method which matches the awaitable pattern:

IEnumerable<File> files = await fileManager.GetFilesAsync();

This code will be interpreted by the compiler roughly as follows (in its simplest form):

var $awaiter = fileManager.GetFilesAsync().GetAwaiter();
if(!$awaiter.IsCompleted) 
{
     DO THE AWAIT/RETURN AND RESUME
}

var files = $awaiter.GetResult();
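You can observe this fast path directly: for a task produced by Task.FromResult, the awaiter reports completed, so an await would skip the suspend/resume machinery entirely. A small sketch of mine, not from the post:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var awaiter = Task.FromResult("files").GetAwaiter();

        // Already completed: an await on this task goes straight to GetResult().
        Console.WriteLine(awaiter.IsCompleted); // True
        Console.WriteLine(awaiter.GetResult()); // files
    }
}
```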

Here, we can see that if the awaited Task has already completed, the code skips all the await/resume work and directly gets the result. Besides this, if you put the async keyword on a method, a bunch of code (including a state machine) is generated regardless of whether you use the await keyword inside the method or not. Keeping all these facts in mind, implementing IFileManager as below is going to cause nothing but overhead:

public class InMemoryFileManager : IFileManager
{
    IEnumerable<File> files = new List<File>
    {
        //...
    };

    public async Task<IEnumerable<File>> GetFilesAsync()
    {
        return await Task.FromResult<IEnumerable<File>>(files);
    }
}

So, don't ever think about "await"ing on Task.FromResult or I'll hunt you down in your sweet dreams :)


How and Where Concurrent Asynchronous I/O with ASP.NET Web API

When we have multiple uncorrelated I/O operations that need to be kicked off, we have quite a few ways to fire them off, and the way you choose makes a great deal of difference in a .NET server-side application. In this post, we will see how we can handle the different approaches in ASP.NET Web API.
2014-02-21 22:06
Tugberk Ugurlu


When we have multiple uncorrelated I/O operations that need to be kicked off, we have quite a few ways to fire them off, and the way you choose makes a great deal of difference in a .NET server-side application. Pablo Cibraro already has a great post on this topic (await, WhenAll, WaitAll, oh my!!) which I recommend you check out. In this article, I would like to touch on a few more points. Let's look at the options one by one. I will use a multiple-HTTP-request scenario here, consumed by an ASP.NET Web API application, but this is applicable to any sort of I/O operation (long-running database calls, file system operations, etc.).

We will have two different endpoints which we will hit to consume the data:

  • http://localhost:2700/api/cars/cheap
  • http://localhost:2700/api/cars/expensive

As we can infer from the URIs, one of them will get us the cheap cars and the other will get us the expensive ones. I created a separate ASP.NET Web API application to simulate these endpoints. Each one takes more than 500ms to complete, and in our target ASP.NET Web API application, we will aggregate these two resources together and return the result. Sounds like a very common scenario.

Inside our target API controller, we have the following initial structure:

public class Car 
{
    public int Id { get; set; }
    public string Make { get; set; }
    public string Model { get; set; }
    public int Year { get; set; }
    public float Price { get; set; }
}

public class CarsController : BaseController 
{
    private static readonly string[] PayloadSources = new[] { 
        "http://localhost:2700/api/cars/cheap",
        "http://localhost:2700/api/cars/expensive"
    };

    private async Task<IEnumerable<Car>> GetCarsAsync(string uri) 
    {
        using (HttpClient client = new HttpClient()) 
        {
            var response = await client.GetAsync(uri).ConfigureAwait(false);
            var content = await response.Content
                .ReadAsAsync<IEnumerable<Car>>().ConfigureAwait(false);

            return content;
        }
    }

    private IEnumerable<Car> GetCars(string uri) 
    {
        using (WebClient client = new WebClient()) 
        {    
            string carsJson = client.DownloadString(uri);
            IEnumerable<Car> cars = JsonConvert
                .DeserializeObject<IEnumerable<Car>>(carsJson);
                
            return cars;
        }
    }
}

We have a Car class which represents a car object that we are going to deserialize from the JSON payload. Inside the controller, we have our list of endpoints and two private methods which are responsible for making HTTP GET requests against the specified URI. The GetCarsAsync method uses the System.Net.Http.HttpClient class, introduced with .NET 4.5, to make the HTTP calls asynchronously. With the new C# 5.0 asynchronous language features (a.k.a. the async modifier and await operator), it is pretty straightforward to write the asynchronous code, as you can see. Note that we used the ConfigureAwait method here, passing false for the continueOnCapturedContext parameter. It’s quite a long topic why we need to do this here, but briefly, one of our samples, which we are about to go deep into, would introduce a deadlock if we didn’t use this method.

To be able to measure the performance, we will use a little utility: the Apache Benchmarking tool (a.k.a. ab.exe). This comes with the Apache Web Server installation, but you don’t actually need to install it; when you download the ZIP file for the installation and extract it, you will find ab.exe inside. Alternatively, you may use the Web Capacity Analysis Tool (WCAT) from the IIS team. It’s a lightweight HTTP load generation tool primarily designed to measure the performance of a web server within a controlled environment. However, WCAT is a bit hard to grasp and set up. That’s why we use ab.exe here for simple load tests.

Please note that the comparisons below are rough and don't constitute any real benchmarking. They are just comparisons for demo purposes, and they illustrate the points we are looking for.

Synchronous and not In Parallel

First, we will look at the all-synchronous, not-in-parallel version of the code. This operation will block the running thread for the amount of time it takes to complete the two network I/O operations. The code is very simple thanks to LINQ.

[HttpGet]
public IEnumerable<Car> AllCarsSync() {

    IEnumerable<Car> cars =
        PayloadSources.SelectMany(x => GetCars(x));

    return cars;
}

For a single request, we expect this to complete in about a second.

AllCarsSync

The result is not surprising. However, when you have multiple concurrent requests against this endpoint, you will see that the blocked threads become the bottleneck for your application. The following screenshot shows 200 requests to this endpoint in blocks of 50 concurrent requests.

AllCarsSync_200

The result is now worse, and we are paying the price for blocking threads on long-running I/O operations. You may think that running these in parallel will reduce the single-request time, and you are not wrong, but this has its own caveats, which is our next section.

Synchronous and In Parallel

This option is almost never good for your application. With it, you perform the I/O operations in parallel, and the request time is significantly reduced if you measure only a single request. However, in our sample case, you will be consuming two threads instead of one to process the request, and you will block both of them while waiting for the HTTP requests to complete. Although this reduces the overall processing time for a single request, it consumes more resources, and you will see that the overall request time increases as your request count increases. Let’s look at the code of the ASP.NET Web API controller action method.

[HttpGet]
public IEnumerable<Car> AllCarsInParallelSync() {

    IEnumerable<Car> cars = PayloadSources.AsParallel()
        .SelectMany(uri => GetCars(uri)).AsEnumerable();

    return cars;
}

We used the “Parallel LINQ (PLINQ)” feature of the .NET framework here to process the HTTP requests in parallel. As you can see, it was just too easy; in fact, it was only one line of digestible code. I tend to see a relationship between the above code and tasty donuts. They both look tasty, but they will work as hard as possible to clog our carotid arteries. The same applies to the above code: it looks really sweet but can make our server application miserable. How so? Let’s send a request to this endpoint and start seeing how.

AllCarsInParallelSync

As you can see, the overall request time has been cut in half. This must be good, right? Not completely. As mentioned before, this is going to hurt us if we see too many requests coming to this endpoint. Let’s simulate this with ab.exe and send 200 requests to this endpoint in blocks of 50 concurrent requests.

AllCarsInParallelSync_200

The overall performance is now significantly reduced. So, where would this type of implementation make sense? If your server application has a small number of users (for example, an HTTP API consumed only by internal applications within your small team), this type of implementation may be beneficial. However, as it’s now annoyingly simple to write asynchronous code with the built-in language features, I’d suggest you choose our last option here: “Asynchronous and In Parallel (In a Non-Blocking Fashion)”.

Asynchronous and not In Parallel

Here, we won’t introduce any concurrent operations; we will go through the requests one by one, but in an asynchronous manner so that the processing thread is freed up during the dead waiting periods.

[HttpGet]
public async Task<IEnumerable<Car>> AllCarsAsync() {

    List<Car> carsResult = new List<Car>();
    foreach (var uri in PayloadSources) {

        IEnumerable<Car> cars = await GetCarsAsync(uri);
        carsResult.AddRange(cars);
    }

    return carsResult;
}

What we do here is quite simple: we iterate through the URI array and make the asynchronous HTTP call for each one. Notice that we were able to use the await keyword inside the foreach loop. This is all fine: the compiler will do the right thing and handle it for us. One thing to keep in mind is that the asynchronous operations won’t run in parallel here, so we won’t see a difference when we send a single request to this endpoint, as we go through the requests one by one.

AllCarsAsync

As expected, it took around a second. When we increase the number of requests and the concurrency level, we will see that the average request time still stays around a second.

AllCarsAsync_200

This option is certainly better than the previous ones. However, we can still do better in certain cases where we have a limited number of concurrent I/O operations. The last option will look into this, but before moving on, we will look at one other option which should be avoided where possible.

Asynchronous and In Parallel (In a Blocking Fashion)

Among the options shown here, this is the worst one you can choose. When we have multiple Task-returning asynchronous methods in hand, we can wait for all of them to finish with the WaitAll static method on the Task class. This introduces several overheads: you will be consuming the asynchronous operations in a blocking fashion, and if these asynchronous methods are not implemented right, you will end up with deadlocks. At the beginning of this article, we pointed out the usage of the ConfigureAwait method. That was to prevent the deadlocks here. You can learn more about this from the following blog post: Asynchronous .NET Client Libraries for Your HTTP API and Awareness of async/await's Bad Effects.

Let’s look at the code:

[HttpGet]
public IEnumerable<Car> AllCarsInParallelBlockingAsync() {
    
    IEnumerable<Task<IEnumerable<Car>>> allTasks = 
        PayloadSources.Select(uri => GetCarsAsync(uri));

    Task.WaitAll(allTasks.ToArray());
    return allTasks.SelectMany(task => task.Result);
}

Let's send a request to this endpoint to see how it performs:

AllCarsInParallelBlockingAsync

It performed really badly, and it gets worse as soon as you increase the concurrency rate:

AllCarsInParallelBlockingAsync_200

Never, ever think about implementing this solution. No further discussion is needed here in my opinion.

Asynchronous and In Parallel (In a Non-Blocking Fashion)

Finally, the best solution: asynchronous and in parallel, in a non-blocking fashion. The below code snippet says it all, but to go through it quickly: we bundle the Tasks together and await the Task.WhenAll utility method. This performs the operations asynchronously and in parallel.

[HttpGet]
public async Task<IEnumerable<Car>> AllCarsInParallelNonBlockingAsync() {

    IEnumerable<Task<IEnumerable<Car>>> allTasks = PayloadSources.Select(uri => GetCarsAsync(uri));
    IEnumerable<Car>[] allResults = await Task.WhenAll(allTasks);

    return allResults.SelectMany(cars => cars);
}

If we make a single request to the endpoint that executes this piece of code, the result will be similar to the previous one:

AllCarsInParallelNonBlockingAsync

However, when we make 50 concurrent requests 4 times, the results shine and lay out the advantages of asynchronous I/O handling:

AllCarsInParallelNonBlockingAsync_200

Conclusion

At the very basic level, what we can take away from this article is this: do perform load tests against your server applications based on your estimated consumption rates if you have any sort of multiple I/O operations. Two of the above options are what you would want in the case of multiple I/O operations. One of them is "Asynchronous but not In Parallel", which is the safest option in my personal opinion, and the other is "Asynchronous and In Parallel (In a Non-Blocking Fashion)". The latter significantly reduces the request time depending on the hardware and the number of I/O operations you have, but as our small benchmarking results showed, firing off many concurrent asynchronous I/O operations in one request just to reduce a single request's time may not be a good fit; the results will most probably be different under high load.
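The gap between the two recommended options can also be sketched without any web framework by letting Task.Delay stand in for the I/O calls. This is a rough illustration of mine under that assumption, not code from the article:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    // Stand-in for an I/O-bound call: completes with its input after a short delay.
    static async Task<int> FetchAsync(int id)
    {
        await Task.Delay(200);
        return id;
    }

    static async Task Main()
    {
        int[] ids = { 1, 2, 3 };

        // Asynchronous but not in parallel: await each call one by one.
        var sw = Stopwatch.StartNew();
        var sequential = new List<int>();
        foreach (int id in ids)
        {
            sequential.Add(await FetchAsync(id));
        }
        long sequentialMs = sw.ElapsedMilliseconds;

        // Asynchronous and in parallel (non-blocking): start all, then await Task.WhenAll.
        sw.Restart();
        int[] parallel = await Task.WhenAll(ids.Select(FetchAsync));
        long parallelMs = sw.ElapsedMilliseconds;

        Console.WriteLine(string.Join(",", sequential)); // 1,2,3
        Console.WriteLine(string.Join(",", parallel));   // 1,2,3
        Console.WriteLine(parallelMs < sequentialMs);    // True
    }
}
```

With three 200ms "calls", the sequential version takes roughly the sum of the delays while the WhenAll version takes roughly the longest single delay.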
