How to mock DateTime.Now in unit tests using the Ambient Context pattern

Note: Sample code for this article is available on GitHub at the following link: https://github.com/akazemis/TestableDateTimeProvider

One of the common dependencies that is hard to unit test is a method that uses DateTime.Now or DateTime.UtcNow in its body. Take this method, for instance:

public bool IsThereWorldCupThisYear()
{
    var currentYear = DateTime.Now.Year;
    return ((currentYear - 1998) % 4) == 0;
}

As you can see, this method gets the current year and checks whether there is a soccer World Cup that year. Now suppose you want to write a unit test for it. How would you do that? How would you verify that it works correctly in 2018, 2022, and so forth? Would you change the system date and run your test? Obviously not a good idea!

This method has a dependency: the static property DateTime.Now, which returns the current system time. To be able to test the method, you need to mock that dependency in your tests. One way of doing that is using Shims in .NET (Microsoft Fakes, which provides unconstrained isolation). There are a few problems with that, though. Firstly, at the time of writing this article, it isn't supported on .NET Core. Secondly, it's slow, and thirdly, I don't like the fake-assembly referencing it requires.

A cleaner way, if you're using dependency injection in your project, is to create an interface and a class that wrap DateTime.Now, inject that helper wherever the production code needs the current system date, and mock it in unit tests. Something like this:

 public interface IDateTimeHelper
 {
     DateTime GetDateTimeNow();
 }

 public class DateTimeHelper : IDateTimeHelper
 {
     public DateTime GetDateTimeNow()
     {
         return DateTime.Now;
     }
 }

 public class WorldCupHandler
 {
     private IDateTimeHelper _dateTimeHelper;
     public WorldCupHandler(IDateTimeHelper dateTimeHelper)
     {
        _dateTimeHelper = dateTimeHelper;
     }

     public bool IsThereWorldCupThisYear()
     {
         var currentYear = _dateTimeHelper.GetDateTimeNow().Year;
         return ((currentYear - 1998) % 4) == 0;
     }
 }

Then wherever you want to write unit tests for classes that take IDateTimeHelper, you can easily mock it. In our case, a unit test for WorldCupHandler would look like this (I'm using xUnit.net, Moq, and FluentAssertions in the examples):

public class WorldCupHandlerTest
{
    [Fact]
    public void IsThereWorldCupThisYear_WhenWorldCupYear_ReturnsTrue()
    {
       var mockDateTimeHelper = new Mock<IDateTimeHelper>();
       var fakeDate = new DateTime(2018, 05, 15);
       mockDateTimeHelper.Setup(o => o.GetDateTimeNow()).Returns(fakeDate);
       var worldCupHandler = new WorldCupHandler(mockDateTimeHelper.Object);
       
       var result = worldCupHandler.IsThereWorldCupThisYear();
       
       result.Should().Be(true);
    }

    [Fact]
    public void IsThereWorldCupThisYear_WhenNonWorldCupYear_ReturnsFalse()
    {
       var mockDateTimeHelper = new Mock<IDateTimeHelper>();
       var fakeDate = new DateTime(2020, 07, 10);
       mockDateTimeHelper.Setup(o => o.GetDateTimeNow()).Returns(fakeDate);
       var worldCupHandler = new WorldCupHandler(mockDateTimeHelper.Object);
 
       var result = worldCupHandler.IsThereWorldCupThisYear();

       result.Should().Be(false);
    }
}

It's testable and clean, and there is nothing wrong with it as long as you're using dependency injection and you have no problem with adding IDateTimeHelper as a dependency across your project. Some people don't like adding such a trivial class as a dependency everywhere they need the system's current date and time, and such places can turn out to be the vast majority of classes in the project.

Additionally, what if you have legacy code and you just want to refactor it, replacing the scattered DateTime.Now or DateTime.UtcNow calls with your helper method to gain more control and make the code testable? If you've done such a thing before, you know how much it hurts! You may end up with a dependency domino that touches most of your classes, injecting IDateTimeHelper into almost every one of them. The same goes for non-legacy code: even when you're starting a project from scratch and heavy modification isn't a problem, you may still end up adding IDateTimeHelper as an injected dependency everywhere, which doesn't seem clean.

In this article, I'm going to suggest an approach that I think is cleaner: using the Ambient Context pattern and the ThreadLocal class. Don't fear the buzzwords; it's not rocket science whatsoever.

Here's the idea: we create a helper class (here I've named it DateTimeProvider and made it a singleton), use that DateTimeProvider in our production code in a very simple way, and simply wrap calls in a context object whenever we need to fake the time. Simple as that!

Here’s how we’d use it in our production code:


 public class WorldCupHandler
 {
     public bool IsThereWorldCupThisYear()
     {
        var currentYear = DateTimeProvider.Instance.GetUtcNow().Year;
        return ((currentYear - 1998) % 4) == 0;
     }
 }

Notice that the IDateTimeHelper dependency is gone: the class no longer needs a constructor parameter at all.

And here’s how we’d write our unit tests:

public class WorldCupHandlerTest
{
    [Fact]
    public void IsThereWorldCupThisYear_WhenWorldCupYear_ReturnsTrue()
    {
       var fakeDate = new DateTime(2018, 05, 15);
       var result = false;
       var worldCupHandler = new WorldCupHandler();

       using(var context = new DateTimeProviderContext(fakeDate))
       {
          result = worldCupHandler.IsThereWorldCupThisYear();
       }
       result.Should().Be(true);
    }

    [Fact]
    public void IsThereWorldCupThisYear_WhenNonWorldCupYear_ReturnsFalse()
    {
       var fakeDate = new DateTime(2020, 07, 30);
       var result = true;
       var worldCupHandler = new WorldCupHandler();

       using(var context = new DateTimeProviderContext(fakeDate))
       {
           result = worldCupHandler.IsThereWorldCupThisYear();
       }

       result.Should().Be(false);
    }
}

As you see in the code above, the only thing we need to do to fake the current system date is wrap our method call in a using block that creates a new DateTimeProviderContext instance, passing the fake date as its constructor's argument. That's it!

Now let’s have a look into the code and see how this magic works. Here’s our DateTimeProvider code:

public class DateTimeProvider
{
    #region Singleton
    private static readonly Lazy<DateTimeProvider> _lazyInstance =
        new Lazy<DateTimeProvider>(() => new DateTimeProvider());

    private DateTimeProvider()
    {
    }

    public static DateTimeProvider Instance
    {
        get
        {
            return _lazyInstance.Value;
        }
    }
    #endregion

    private Func<DateTime> _defaultCurrentFunction = () => DateTime.UtcNow;

    public DateTime GetUtcNow()
    {
        if (DateTimeProviderContext.Current == null)
        {
            return _defaultCurrentFunction.Invoke();
        }
        else
        {
            return DateTimeProviderContext.Current.ContextDateTimeUtcNow;
        }
    }
}

Notice the GetUtcNow() method: we use a static property named DateTimeProviderContext.Current to check whether our method call is wrapped in a context. If it's not, we return the result of _defaultCurrentFunction, a function delegate that returns the current system date and time. Otherwise, we get the DateTime from the context that wraps our method call.

Let's see what our DateTimeProviderContext looks like:

 public class DateTimeProviderContext : IDisposable
 {
    private static ThreadLocal<Stack<DateTimeProviderContext>> ThreadScopeStack =
        new ThreadLocal<Stack<DateTimeProviderContext>>(() => new Stack<DateTimeProviderContext>());

    public DateTime ContextDateTimeUtcNow;

    public DateTimeProviderContext(DateTime contextDateTimeUtcNow)
    {
       ContextDateTimeUtcNow = contextDateTimeUtcNow;
       ThreadScopeStack.Value.Push(this);
    }

    public static DateTimeProviderContext Current
    {
       get
       {
          if (ThreadScopeStack.Value.Count == 0)
          {
             return null;
          }
          return ThreadScopeStack.Value.Peek();
       }
    }

    #region IDisposable Members
    public void Dispose()
    {
       ThreadScopeStack.Value.Pop();
    }
    #endregion
 }

And here's where the magic happens. In DateTimeProviderContext, we've used the Ambient Context pattern in conjunction with the ThreadLocal class to facilitate faking the current system date and time.

First of all, notice that it implements the IDisposable interface. So once we wrap a using block around our code, the context object is created at the beginning of the block and its Dispose() method is called at the end. We create the instance with the fake date (passed as the constructor's argument) stored in a property, and push the context object onto a stack of contexts. Then, once the using block is closed, Dispose gets called and we pop the context off the stack. At any point, DateTimeProviderContext.Current returns the innermost context wrapping our code.
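The mechanism itself isn't specific to C#. As a rough, hypothetical sketch, here is the same push-on-construct / pop-on-dispose idea in TypeScript (all names are made up for illustration; since JavaScript is single-threaded, a plain static array stands in for the thread-local stack, and dispose() is called explicitly instead of via a using block):

```typescript
// Sketch of the ambient-context idea: a static stack of contexts,
// pushed on construction and popped on disposal.
class DateTimeProviderContext {
  private static scopeStack: DateTimeProviderContext[] = [];

  constructor(public readonly contextUtcNow: Date) {
    // Entering the scope: push this context onto the stack.
    DateTimeProviderContext.scopeStack.push(this);
  }

  static get current(): DateTimeProviderContext | null {
    const stack = DateTimeProviderContext.scopeStack;
    return stack.length > 0 ? stack[stack.length - 1] : null;
  }

  dispose(): void {
    // Leaving the scope: pop the context back off.
    DateTimeProviderContext.scopeStack.pop();
  }
}

class DateTimeProvider {
  static getUtcNow(): Date {
    const ctx = DateTimeProviderContext.current;
    // Fall back to the real clock when no context wraps the call.
    return ctx !== null ? ctx.contextUtcNow : new Date();
  }
}

// Usage: wrap a call in a context to fake "now", then dispose.
const fake = new Date(Date.UTC(2018, 4, 15));
const ctx = new DateTimeProviderContext(fake);
const inside = DateTimeProvider.getUtcNow(); // the fake date
ctx.dispose();
```

The shape is identical to the C# version above; only the thread-safety concern disappears.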

Using a stack, we can even have nested context blocks in our test code if we need to, like the following:

 [Fact]
 public void GetUtcNow_WhenMultipleContext_ReturnsCorrectFakeUtcNow()
 {
     var fakeDate1 = new DateTime(2018, 5, 26, 10, 0, 0, DateTimeKind.Utc);
     var fakeDate2 = new DateTime(2020, 7, 15, 12, 30, 0, DateTimeKind.Utc);
     DateTime result1;
     DateTime result2;

     using (var context1 = new DateTimeProviderContext(fakeDate1))
     {
         result1 = DateTimeProvider.Instance.GetUtcNow();
         using (var context2 = new DateTimeProviderContext(fakeDate2))
         {
            result2 = DateTimeProvider.Instance.GetUtcNow();
         }
     }

     result1.Should().Be(fakeDate1);
     result2.Should().Be(fakeDate2);
 }

Why did we wrap our stack in a ThreadLocal? Because if we run our tests in parallel, they run on separate threads, and since we're using a static field to hold the stack, they would interfere with each other and cause intermittent failures. ThreadLocal gives each thread its own copy of that static field (ThreadScopeStack), making it thread-safe.

That's it! Using this trick you can skip a lot of refactoring effort and still make your code testable. It took me a while to come up with this solution; hopefully this article saves you that time.

The only downside of this approach is that we've modified the production code's logic to support unit-testing scenarios, which is generally not good practice. In this specific case, though, I'd say it's not a big problem, since it does the job with a minimal amount of modification to legacy code.

Note that I've used DateTime.Now and DateTime.UtcNow only as an example of testing hard-to-test dependencies; you can use the same technique to handle similar scenarios in your code.

By the way, I've also pushed the sample code to a repository on GitHub, where you can see it in action.

Good Luck!

xUnit.net vs NUnit, a quick pragmatic comparison

At the beginning of our last project, a greenfield project on .NET Core, I was responsible for choosing a testing framework, an isolation framework, and all the tools and frameworks related to unit and integration testing. So I started searching for a decent testing framework for .NET Core and came up with two major open-source candidates: NUnit and xUnit.net. In this article, I'll briefly go through the features of each framework and share my final verdict!

I'd been using NUnit for many years with no problems whatsoever, so I tended to have a bias towards it. My task was to choose the best testing framework, though, so I put my experience and comfort aside and tried to avoid any familiarity bias; the learning curve of a new testing framework wouldn't be that steep anyway.

NUnit

NUnit was originally ported from a Java testing framework named JUnit, giving it a mature lineage dating back to 1998. It is widely used in the .NET community and highly documented; at the time of writing this article, 19,225 questions are tagged nunit on Stack Overflow.

xUnit.net

xUnit.net is a relatively new testing framework, written by the original author of NUnit v2. At the moment it is the newest of the mainstream unit testing tools and is well accepted by the .NET Foundation. It's designed to be extensible, flexible, fast, and clean. So far, 4,008 questions are tagged xunit on Stack Overflow.

Here's how NUnit and xUnit.net compare at a glance:

  • Supported platforms: UWP, Desktop, Windows Phone, Xamarin Android, Xamarin iOS, and ASP.NET for both
  • .NET Core support: yes for both
  • IDE tool support: Visual Studio, Visual Studio Code, and ReSharper for both
  • CI tool support: TeamCity, VSTS, MSBuild, CruiseControl.NET, Bamboo, and Jenkins for both
  • Parallel execution: yes for both
  • Execution isolation level: per test class in NUnit; per test method in xUnit.net
  • Extensible test attributes: no in NUnit (the Test and TestCase attributes are sealed); yes in xUnit.net (the Theory and Fact attributes are extensible)

As you can see, both NUnit and xUnit.net are strong, mature, and widely adopted by the .NET community, so it's not easy to say that either of them isn't good enough.

To me, the way xUnit.net runs tests is admirable. By default, it creates a new instance of the test class for each test method, which means test methods are completely isolated and cannot interfere with each other. This mitigates the risk of interdependent test methods, which is a bad practice. And if in some rare case you do need to share context among several test methods, there is still a way to do that.
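To see why per-test-method instances matter, here's a small language-neutral sketch (TypeScript with plain assertions; the class and method names are made up). Because each "test method" runs on a fresh instance, mutable state on the test class can never leak from one test into the next:

```typescript
// A test class with mutable state, as many real test classes have.
class ShoppingCartTests {
  private items: string[] = []; // fresh per instance

  testAddSingleItem(): boolean {
    this.items.push("apple");
    return this.items.length === 1;
  }

  testAddTwoItems(): boolean {
    this.items.push("bread", "milk");
    return this.items.length === 2;
  }
}

// xUnit.net-style execution: one new instance per test method.
const passed1 = new ShoppingCartTests().testAddSingleItem();
const passed2 = new ShoppingCartTests().testAddTwoItems();

// Shared-instance execution: the second test now sees leftover state
// from the first one (items.length becomes 3, not 2) and fails.
const shared = new ShoppingCartTests();
shared.testAddSingleItem();
const passed2Shared = shared.testAddTwoItems();
```

With fresh instances both tests pass regardless of ordering; with a shared instance the second test silently depends on the first.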

Another advantage of xUnit.net over NUnit is its extensibility and flexibility: you can inherit from the Theory, Fact, and other attributes. Personally, I don't care much about this feature, as it has never proved useful in any of the gigs I've done, but the xUnit.net designers clearly had extensibility in mind.

NUnit, on the other hand, has a longer history. Many more projects have already been built with NUnit, and you can find far more documentation, samples, and discussion about it in developer communities; for instance, there are roughly five times as many NUnit questions as xUnit.net questions on Stack Overflow.

The bottom line: I'd pick xUnit.net over NUnit, mainly because of the way it runs tests, with each test method executing in its own test class instance in isolation, which can also increase the level of parallelism. I like its coding style as well: using the constructor and Dispose instead of the clunky TestFixtureSetUp and TestFixtureTearDown attributes.

N.B.: I have also made a PowerPoint presentation about the chosen tools and frameworks, plus a quick introduction to TDD. You can see the slides here.

How to make a Web API endpoint's base URL configurable in Angular2 NSwag clients

For developers working on Angular2 + Web API projects, Swagger is a familiar name. Swagger is a nice tool for exposing documentation and a test UI for Web API controllers and methods (and RESTful APIs in general).


NSwag is another set of tools for creating client classes for RESTful APIs. It's a code generation tool that creates Web API client classes (including Angular2 services in TypeScript); by Web API clients, I mean the classes that call Web API methods over HTTP and return their results. NSwag Studio is one of the handy tools we can use to generate these client classes. For more information about NSwag and how it works, click here; learning about Swagger and NSwag is out of this article's scope.

The problem I'm going to address in this article, though, is: how to get NSwag clients to read their Web API endpoint from JSON config files.

Remember that if your Angular2 app is hosted at the same address as your Web API endpoint, this article isn't relevant, since in that case there's no need to set the Web API endpoint. Here we're assuming that the Angular2 app calls Web API methods on a separate endpoint (which is quite often the case in a well-architected Angular app).

If you take a look at the classes NSwag Studio generates, you will see something like this:

export const API_BASE_URL = new OpaqueToken('API_BASE_URL');

export interface IAuthClient {
    authenticate(loginInfo: LoginInfoViewModel): Observable;
}

@Injectable()
export class AuthClient extends ApiClientBase implements IAuthClient {
    private http: Http = null;
    private baseUrl: string = undefined;
    protected jsonParseReviver: (key: string, value: any) => any = undefined;

    constructor(@Inject(Http) http: Http, @Optional() @Inject(API_BASE_URL) baseUrl?: string) {
        super();
        this.http = http;
        this.baseUrl = baseUrl ? baseUrl : "";
    }

    authenticate(loginInfo: LoginInfoViewModel): Observable {
        let url_ = this.baseUrl + "/api/Auth/Authenticate";

        const content_ = JSON.stringify(loginInfo ? loginInfo.toJS() : null);

        return this.http.request(url_, this.transformOptions({
            body: content_,
            method: "post",
            headers: new Headers({
                "Content-Type": "application/json; charset=UTF-8",
                "Accept": "application/json; charset=UTF-8"
            })
        })).map((response) => {
            return this.transformResult(url_, response, (response) => this.processAuthenticate(response));
        }).catch((response: any, caught: any) => {
        // ... (rest of the generated method omitted)

As you can see in the code (notice the API_BASE_URL usages in particular), it uses a constant named API_BASE_URL, which is injected into the client's constructor to set the Web API endpoint base URL. If there is no provider for API_BASE_URL, the base URL falls back to an empty string, which results in calling the Web API methods at the same URL the Angular2 app is hosted on.

Defining a provider for API_BASE_URL is as easy as adding the following code to the app module, provided we're happy to simply hardcode the base URL:

import { API_BASE_URL } from '../../nswagclients';

@NgModule({
    imports: [ ... ],
    providers: [
        {
            provide: API_BASE_URL,
            useFactory: () => {
                return 'http://mywebapiurl.com';
            }
        }
    ]
})
export class AppModule {
}

Hardcoding configuration, though, is not what we want in a real-world project. We want to be able to change our configs without rebuilding the project: a simple JSON file where we set the endpoint, so that a browser refresh picks up the change. I've seen many JavaScript developers who don't mind hardcoding such things and tend to patch around the problem with task runners (such as Gulp and Grunt). Please don't do that! Always say no to hardcoding configuration, whether you're writing Java or C# that compiles to binaries, or JavaScript/TypeScript that ships as plain text.

Once we take the base URL out of the code, we face a serious problem: our code needs to read configs from a file that is not part of the code. See the problem? We need to make an asynchronous HTTP call for the JSON file, load its content, and only then have the API_BASE_URL provider return the value read from the JSON. It may seem easy, but it isn't if you don't do it right: the main pitfall is that your provider's factory method returns its value before the JSON config file has loaded.
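To make the pitfall concrete, here's a minimal hypothetical sketch in plain TypeScript (a resolved promise stands in for the HTTP call; all names are made up). The synchronous factory reads the value before the asynchronous load has assigned it:

```typescript
let webApiEndpointUrl = ""; // to be filled in from the JSON config

async function loadConfig(): Promise<void> {
  // Stand-in for http.get('.../config.json'); resolves on a later microtask.
  const config = await Promise.resolve({ webApiEndpoint: "http://example.com" });
  webApiEndpointUrl = config.webApiEndpoint;
}

loadConfig(); // fire-and-forget: nothing waits for it to finish

// A naive provider factory runs synchronously, right now,
// so it still sees the empty string -- the config hasn't loaded yet.
const valueSeenByFactory = webApiEndpointUrl;
```

This is exactly why the factory must be deferred until the load has completed, rather than being allowed to run immediately at startup.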

To solve this problem we can use a built-in Angular2 provider named APP_INITIALIZER. This provider simply runs the code we give it before the Angular app starts. So in this case, we'll call a method on a class that loads the JSON file and then sets a static field on the class, so that we can return that value from the API_BASE_URL factory.

import { API_BASE_URL } from '../../nswagclients';
import { WebApiEndpointConfigService } from '../../services/web-api-endpoint-config.service';
import { NgModule, APP_INITIALIZER, ReflectiveInjector } from '@angular/core';

import { AppConfig } from '../../app-config';

@NgModule({
    imports: [ ... ],
    declarations: [ ... ],
    exports: [ ... ],
    providers: [
        WebApiEndpointConfigService,
        {
            provide: APP_INITIALIZER,
            useFactory: () => () => {
                let appConfig = ReflectiveInjector.resolveAndCreate([AppConfig]).get(AppConfig);
                let promise = appConfig.load().toPromise();
                return promise;
            },
            multi: true
        },
        {
            provide: API_BASE_URL,
            useFactory: () => {
                return AppConfig.webApiEndpointUrl;
            }
        }
    ],
    bootstrap: []
})
export class AppModule {
}

As you can see in the code above, we're using a service called WebApiEndpointConfigService; the code below shows how it's defined:

import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';
import { Http, Response } from '@angular/http';
import { environment } from '../../environments/environment';

@Injectable()
export class WebApiEndpointConfigService {
    private _config: any = null;
    private _env: string;

    constructor(private http: Http) {
    }

    public load() {
        var env = environment.production ? 'production' : 'development';
        this._env = env;
        let envConfigFilePath = '../../assets/config/' + env + '.json';
        return this.http.get(envConfigFilePath)
            .map(res => res.json())
            .map((configData) => {
                this._config = configData;
                return configData;
            })
            .catch(error => {
                console.error(error);
                return Observable.throw(error.json().error || 'Server error');
            });
    }

    getConfig() {
        if (this._config === null) {
            return this.load().map(config => {
                if (config.enableDebug) {
                    console.log('Web Api Endpoint: ' + config.webApiEndpoint);
                }
                return config;
            });
        }
        else {
            return Observable.create((observer) => {
                if (this._config.enableDebug) {
                    console.log('Web Api Endpoint: ' + this._config.webApiEndpoint);
                }
                observer.next(this._config);
            });
        }
    }
}

And here's the AppConfig class used in the provider code:

import { WebApiEndpointConfigService } from './services/web-api-endpoint-config.service';
import { HttpModule, Http, XHRBackend, ConnectionBackend, BrowserXhr, ResponseOptions, XSRFStrategy, BaseResponseOptions, CookieXSRFStrategy, RequestOptions, BaseRequestOptions } from '@angular/http';
import { ReflectiveInjector, Injectable } from '@angular/core';

@Injectable()
class MyCookieXSRFStrategy extends CookieXSRFStrategy {
    constructor() {
        super('', '');
    }
}

@Injectable()
export class AppConfig {
    public static webApiEndpointUrl: string;
    private static configLoaded: Boolean = false;

    constructor() {
    }

    load() {
        if (AppConfig.configLoaded) {
            return;
        }
        let injector: any = ReflectiveInjector.resolveAndCreate([
            WebApiEndpointConfigService,
            Http, BrowserXhr,
            { provide: ConnectionBackend, useClass: XHRBackend },
            { provide: ResponseOptions, useClass: BaseResponseOptions },
            { provide: XSRFStrategy, useClass: MyCookieXSRFStrategy },
            { provide: RequestOptions, useClass: BaseRequestOptions }
        ]);
        let configService: WebApiEndpointConfigService = injector.get(WebApiEndpointConfigService);
        return configService.getConfig().map(config => {
            AppConfig.webApiEndpointUrl = config.webApiEndpoint;
            AppConfig.configLoaded = true;
            return config;
        });
    }
}

Remember that the reason we had to use ReflectiveInjector in the AppConfig class is that, at the time this code runs, there is no provider available to inject WebApiEndpointConfigService, so we have to resolve the service and the whole chain of its dependencies ourselves.

And here is the last piece of the puzzle which is our JSON config file:

{
    "webApiEndpoint": "http://myWebAPIEndpoint.com",
    "enableDebug": true
}

So let’s briefly review what we’ve done to make the configurable endpoint happen:

  1. We made two config files, development.json and production.json, each representing the config for the development or production environment.
  2. We defined a service that loads the JSON config file from the current Angular2 app's host. We're assuming you're using the Angular CLI structure for your NG2 app, so the service consults the environment to load the pertinent config file, either development.json or production.json.
  3. We made a class named AppConfig that holds the endpoint base URL in a static variable (webApiEndpointUrl), plus a load method that calls the service defined in step 2 and sets that static variable to the value read.
  4. In our AppModule, we added an APP_INITIALIZER provider whose factory returns a Promise from the load method defined in step 3. That's the key to running asynchronous code at application start and waiting for the result before the app starts.
  5. Finally, we added another provider to the AppModule to enable the API_BASE_URL injection into the NSwag code. The factory method for API_BASE_URL simply returns the value of the static field set in step 3.

I spent a few days coming up with this solution to get rid of the hardcoded base URL for RESTful services. I hope this article saves time and effort for fellow Angular2 developers.

Enjoy coding fellas!

Does TDD really matter?

Test-Driven Development (TDD) has been around for quite a while (since 2003) and nowadays somehow works like a buzzword in developers' resumes.

Many decent companies around the world pay close attention to the TDD skills, experience, and, more importantly, TDD inclination of their job applicants during recruitment. Sometimes a candidate's TDD adherence even matters more than any other skill, such as knowing the newest frameworks and technologies. But why is that? Does TDD really matter in real-world projects, or is it yet another recruiting buzzword?


In this article I'm not going to bother writing yet another introduction to TDD, as there are already more than enough. What I'm going to focus on here is the difference it makes to software design, software and code quality, and a developer's level of expertise.

Let's have a quick look at TDD to see what it's about, and then we'll get back to our main point.

By definition, Test-Driven Development (TDD) is a software development process that relies on the repetition of a short development cycle. Each cycle consists of the following steps:

  • Add some tests
  • Run the tests and check if they all fail
  • Write some code to pass the tests
  • Run the tests and check if they all pass
  • Refactor the code if needed
  • Repeat the cycle for the next requirement
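As a toy illustration of a single cycle, here's a hypothetical world-cup-year check in TypeScript (plain assertions stand in for a real test framework; the function name is made up):

```typescript
// Step 3 of the cycle: the minimal implementation that satisfies the tests.
function isWorldCupYear(year: number): boolean {
  return (year - 1998) % 4 === 0;
}

// Steps 1-2: these "tests" were written before the function existed,
// and failed (red). Step 4: with the implementation above, they pass (green).
console.assert(isWorldCupYear(2018) === true);
console.assert(isWorldCupYear(2022) === true);
console.assert(isWorldCupYear(2020) === false);
```

Step 5 would then refactor the implementation (say, extracting a named constant) while keeping all three assertions green.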


First of all, when we talk about tests here, we mean automated tests as opposed to end-to-end manual testing. In other words, we're writing code to test the actual code!

Generally speaking, there are three types of automated (specifically functional) tests:

  • Unit testing: tests one and only one method (function), isolated from all of the method's dependencies (such as databases, files, network resources, etc.)
  • Integration testing: tests a combination of methods and components to see whether they work properly when integrated with each other
  • End-to-end (E2E) automated testing: using tools such as Selenium, or Protractor for AngularJS

TDD can involve all three; traditionally, though, we mean unit and integration tests when talking about automated tests.

At first, as a developer, it doesn't make sense to write code that tests something which doesn't exist yet. But that's one of the main points: the tests represent the specs we are about to implement. Every component in our software is there because we expect it to do something, so before writing code in a TDD manner, we need to be clear about the specifications (expected behavior) of the system. With clear specs, we can write exactly the code that's needed.

So let's highlight the first benefit of TDD: being clear about the software module's specs. Why is it important? To answer that, you need to have worked in both TDD and non-TDD teams. Well, I can bet you on this: if you do TDD for a while and then go back to a non-TDD environment, you can feel the chaos, the rework, and the ping-pong game between business analysts (BAs), QA (the end-to-end testing team), and developers. It's a vicious, tedious game.

In the future I'll write about BDD (Behavior-Driven Development), which is an evolution of TDD in terms of having clear specs; I won't go into more detail on that topic here.

When you write the test and then the code, you are covering the code's health and integrity with your unit tests. If you develop a component and another developer later changes it to add a new feature, and that change breaks some functionality of your code, they will spot the problem as early as possible and can fix it easily and briskly. Even if they're careless and don't run the tests before pushing to the code repository, a CI (Continuous Integration) server will send everyone on the team a notification email indicating that the change has broken the build by causing a test to fail.

So another benefit of TDD is protecting our code from breaking changes and enabling the team to find bugs at the very moment they're introduced. Remember, though, that we need code review plus Continuous Integration (CI) in conjunction with unit and integration testing to achieve a robust mechanism for protecting the code against breaking changes.

Note that code review plays a crucial role in this process: if a developer changes the code and the tests such that wrong tests pass against wrong code, we're screwed! You could even argue that reviewing tests is more important than reviewing the actual code. (I'll dedicate separate articles to code review and CI/CD quite soon.)

To me, the most important plus of doing TDD is its impact on the design and quality of the code. Writing code in a TDD fashion means writing testable code, which takes a different style of software design. At first, testability might seem trivial, but a lot of value is buried behind it.

Testable code tends to adhere to the SOLID design principles, which are fundamental object-oriented design principles and the root of many design patterns and best practices. SOLID is an abbreviation of five basic principles:

  • (S): Single Responsibility Principle
    Each method should be responsible for doing one and only one thing; a class should be responsible for a single job.
  • (O): Open/Closed Principle
    Your code should be open to extension and closed to modification, meaning that to extend the software's functionality we should add code rather than modify existing code.
  • (L): Liskov Substitution Principle
    Derived classes must be completely substitutable for their base classes. (This principle needs more explanation, which is not relevant to this topic.)
  • (I): Interface Segregation Principle
    Classes should not be forced to depend upon interfaces that they don't use. (This principle needs more explanation, which is not relevant to this topic.)
  • (D): Dependency Inversion Principle
    Depend upon abstractions instead of concrete classes, meaning we write code whose dependencies are interfaces or pure abstract classes (with no implementation) rather than concrete types (non-abstract classes). In fact, Dependency Injection, a widely used design pattern, is based on this principle. I'll cover it later in another article.

Writing testable code forces developers to stick to the Single Responsibility Principle: since each unit test should test only one thing, a method with multiple responsibilities is hard and cumbersome to test.

We have to apply the Open/Closed Principle while doing TDD, because if we do, we can just add new code and write tests for that specific extension, rather than modifying existing code and its pertinent tests. Therefore, a developer who breathes in a test-driven environment tends to adhere to the Open/Closed Principle by using best practices and design patterns (such as Strategy, Decorator, Bridge, etc.).
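As a hypothetical sketch of how the Strategy pattern keeps code open for extension (the pricing domain here is invented for illustration): adding a new discount rule means adding a new class with its own tests, while `PriceCalculator` and its existing tests stay untouched.

```csharp
// Each pricing rule is a separate strategy; a new rule is a new class
public interface IDiscountStrategy
{
    decimal Apply(decimal price);
}

public class NoDiscount : IDiscountStrategy
{
    public decimal Apply(decimal price) => price;
}

public class PercentageDiscount : IDiscountStrategy
{
    private readonly decimal _percent;
    public PercentageDiscount(decimal percent) { _percent = percent; }
    public decimal Apply(decimal price) => price * (1 - _percent / 100m);
}

// The calculator is closed for modification: it never changes
// when a new discount type is introduced
public class PriceCalculator
{
    private readonly IDiscountStrategy _discount;
    public PriceCalculator(IDiscountStrategy discount) { _discount = discount; }
    public decimal FinalPrice(decimal basePrice) => _discount.Apply(basePrice);
}
```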

More importantly, the need for testability pushes developers to use dependency injection in their code, to be able to replace the actual dependencies with fake implementations. For instance, consider a repository class that relies on a database context object (say it’s using Hibernate session classes or Entity Framework’s DbContext object). If this database context is injected into the repository, we can unit test the repository by faking the DB context object, without any need to connect to the actual database. Technically, this is called testing in isolation: once we’re unit testing, we should isolate the component or system under test (SUT) from all of its dependencies.
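Here is a minimal sketch of that repository idea (all type names are hypothetical, not taken from the linked sample): the repository depends on an injected `IDbSession` abstraction, so a unit test can substitute an in-memory fake and never open a real database connection.

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Abstraction over the real DB context
// (a production implementation would wrap an EF DbContext or an NHibernate ISession)
public interface IDbSession
{
    IEnumerable<Customer> Customers { get; }
}

// In-memory fake used in unit tests: no database involved
public class InMemoryDbSession : IDbSession
{
    private readonly List<Customer> _customers;
    public InMemoryDbSession(IEnumerable<Customer> customers)
    {
        _customers = customers.ToList();
    }
    public IEnumerable<Customer> Customers => _customers;
}

// The repository only sees the abstraction, so it can be tested in isolation
public class CustomerRepository
{
    private readonly IDbSession _session;
    public CustomerRepository(IDbSession session) { _session = session; }

    public Customer FindByName(string name) =>
        _session.Customers.FirstOrDefault(c => c.Name == name);
}
```

A test would build an `InMemoryDbSession` with a couple of known customers, pass it to the repository, and assert on the query results.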

You can see some code I’ve pushed to GitHub as a sample of unit testing repository classes, to get a better sense of how dependency injection facilitates unit testing.
To see the sample repository class click here, and to see the unit test class for that repository click here!

Using dependency injection results in clean component decoupling in our app, which is a crucial factor in software design. And a test-driven attitude forces developers to write loosely coupled classes so they can be tested in isolation.

The point I’m trying to make in this article is that a test-driven attitude is not just about writing automated tests for our code; it has an immense impact on our design and coding style as well. In fact, a developer with TDD skills and experience is one a professional development team can count on.

To wrap up, I would say: “Hell YES!!! TDD really matters, and it is a must for a developer who wants to work in a professional development team.” So if you want to be a real developer, start learning TDD or brush up your expertise, rather than whining about the companies that care so much about it!

.NET Core 1.0 : a giant leap in .net world

Microsoft finally released the new generation of the .NET Framework, named .NET Core, on June 27th, 2016. I believe this version will be a turning point in .NET development and a giant leap for the .NET stack.

The reason I believe so is the combination of C#’s language features and convenience of coding, a significant improvement in the framework’s performance, and cross-platform support for major operating systems like Windows, Linux and macOS. Add the open-source code base and a potentially growing community to that, and I believe it’s not just yet another .NET version, but the foundation of a powerful new technology stack.

The benchmarks published on GitHub show exciting results:

https://github.com/aspnet/benchmarks

Here is an example of a benchmark run against HTTP servers, showing ASP.NET Core serving roughly three times as many requests per second as NodeJS, which sounds unbelievable to me! Check out this comparative benchmark, copied from the GitHub page above:

Plain Text Performance benchmark

Similar to the plain text benchmark in the TechEmpower tests. Intended to highlight the HTTP efficiency of the server & stack. Implementations are free to cache the response body aggressively and remove/disable components that aren’t required in order to maximize performance.

Stack                                | Server           | Req/sec | Load Params                  | Impl                                                 | Observations
ASP.NET 4.6                          | perfsvr          | 57,843  | 32 threads, 256 connections  | Generic reusable handler, unused IIS modules removed | CPU is 100%, almost exclusively in user mode
IIS Static File (kernel cached)      | perfsvr          | 276,727 | 32 threads, 512 connections  | hello.html containing “HelloWorld”                   | CPU is 36%, almost exclusively in kernel mode
IIS Static File (non-kernel cached)  | perfsvr          | 231,609 | 32 threads, 512 connections  | hello.html containing “HelloWorld”                   | CPU is 100%, almost exclusively in user mode
NodeJS                               | perfsvr          | 106,479 | 32 threads, 256 connections  | The actual TechEmpower NodeJS app                    | CPU is 100%, almost exclusively in user mode
NodeJS                               | perfsvr2 (Linux) | 127,017 | 32 threads, 512 connections  | The actual TechEmpower NodeJS app                    | CPU is 100%, almost exclusively in user mode
ASP.NET Core on Kestrel              | perfsvr          | 313,001 | 32 threads, 256 connections  | Middleware class, multi IO thread                    | CPU is 100%
Scala – Plain                        | perfsvr          | 176,509 | 32 threads, 1024 connections | The actual TechEmpower Scala Plain plaintext app     | CPU is 68%, mostly in kernel mode
Netty                                | perfsvr          | 447,993 | 32 threads, 256 connections  | The actual TechEmpower Netty app                     | CPU is 100%

To be honest, I can’t quite believe such a performance improvement yet; I need to try it myself and run my own benchmark! I’ll share my results and write about it in the future.

By the way, if you’re into Docker, there is a docker image for .NET core downloadable from here : https://www.microsoft.com/net/core#docker

Check usernames availability!

What I’m gonna introduce in this post has nothing to do with tech or software development, but it could be handy once in a blue moon!

Today, while looking for something else on the web, I came across a cool website that lets you check the availability of a username across heaps of social networks in just a few seconds.

Here’s the address: http://checkusernames.com/ , have fun with it! :))

Tools to detect websites’ techs

As a web developer, I often come across cool stuff (technology-wise) while surfing the web, and I’m always wondering what kind of framework, library or technology was used to build it.

I used to go with old-school methods, like viewing the website’s source code or inspecting elements (through the dev tools) in my browser, to figure out the libraries and frameworks used in the page. But you know, it takes time and thought; sometimes the code doesn’t make much sense, and for libraries we know almost nothing about, it gets tricky to work out how things are working in the page.

In this post, I’m going to introduce some handy tools I’ve found to detect this tech stuff quickly, with practically no effort.

One of my favorite tech detectors is Wappalyzer, which is an add-on for the Chrome, Firefox and Opera browsers. You just need to install the add-on and that’s it! As you surf the web, it shows all the detected technologies as icons in your address bar. If you click on the icons, it shows a full list of what’s in use: it can detect the server-side tech stacks/platforms, web servers, even the CMS used, and all the JavaScript libraries. If you’re wondering about any of them, you can see a brief introduction to the tech on the Wappalyzer website (by clicking on the tech name in the list) and go to the official website of the pertinent technology.

[Screenshot: Wappalyzer listing the technologies detected on a page]

BuiltWith is another cool web-based tech detector I normally use. All you need to do is put the URL into www.builtwith.com and see the result. It shows much more detail about the hosting and server-side aspects.

[Screenshot: BuiltWith report for eBay]