Reaching 100% test coverage is a dream for many teams. The metric used is code coverage, but is that enough? Unfortunately not. Line coverage is not the same as functional coverage, and it is full functional coverage that really matters in the end.
Look at a simple method that formats a string.
```csharp
public static string Format(int? value)
{
    const int defaultValue = 42;
    if (!value.HasValue)
    {
        value = defaultValue;
    }
    return "The value is " + defaultValue + ".";
}
```
There is a correct test for this method that passes and gives 100% code coverage. Still, there is a severe bug in there. Can you spot it? (If you don’t, just keep reading; it will be obvious when another test is added.)
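The two tests can be sketched as a small self-contained program, using plain assertions instead of a test framework (the `Formatter` and `Demo` class names are invented for the example):

```csharp
using System;

public static class Formatter
{
    // The method from above, bug included.
    public static string Format(int? value)
    {
        const int defaultValue = 42;
        if (!value.HasValue)
        {
            value = defaultValue;
        }
        return "The value is " + defaultValue + ".";
    }
}

public static class Demo
{
    public static void Main()
    {
        // This single test executes every line of Format, so code
        // coverage reports 100%, and the test passes.
        Console.WriteLine(Formatter.Format(null) == "The value is 42."); // True

        // A second test with a non-null value exposes the bug: the
        // return statement concatenates defaultValue instead of value.
        Console.WriteLine(Formatter.Format(7) == "The value is 7."); // False
    }
}
```

Full line coverage from the first test alone says nothing about the second, unexercised behaviour.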
Adding functionality to a library without touching the existing code should be safe for clients, shouldn’t it? Unfortunately not: adding another overload to a library can be a breaking change. It might work when the library is updated, but suddenly break when the client is recompiled – even though no client code was changed. That’s nasty, isn’t it?
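A minimal sketch of how this can happen (the `Library` class and `Describe` methods are invented for illustration):

```csharp
using System;

public static class Library
{
    // Version 1 of the library only had this method:
    public static string Describe(object value)
    {
        return "object overload";
    }

    // Version 2 adds this overload, without touching the existing code:
    public static string Describe(string value)
    {
        return "string overload";
    }
}

public static class Client
{
    public static void Main()
    {
        // A client binary compiled against v1 has the call to
        // Describe(object) baked into its IL, so it keeps working
        // unchanged when v2 of the library is dropped in.
        //
        // Recompiling the same client source against v2 makes overload
        // resolution pick the more specific Describe(string) instead:
        // the behaviour changes even though no client code was edited.
        Console.WriteLine(Library.Describe("hello")); // prints "string overload"
    }
}
```

The overload is only "added functionality" at the binary level; at the source level it silently changes which method existing call sites bind to.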
When working with Kentor.AuthServices I’ve had to think through versioning in more detail than I had before. When is the right time to go to 1.0? What is the difference between going from 0.8.2 to 0.8.3 or 0.9.0? When researching, I found that the answer to all versioning questions is Semantic Versioning.
A version is of the form Major.Minor.Patch.
- Major is increased if there are breaking changes. The other numbers are reset.
- Minor is increased if there is added functionality that is non-breaking. The patch number is reset.
- Patch is increased for backwards-compatible bug fixes.
For example, starting from 1.4.2: a bug fix gives 1.4.3, new non-breaking functionality gives 1.5.0, and a breaking change gives 2.0.0.
The terms breaking changes and backwards compatible are key to these definitions, so using semantic versioning requires keeping tight control of which changes are breaking and which are not. In most cases it is quite simple, but there are a few pitfalls that I’ll explore in this post.
The background to this post is that I listened to Scott Meyers’ NDC talk Effective Modern C++ and was reminded of how C++ programmers have to keep track of all kinds of nastiness in the language that might turn into hard-to-track-down bugs. C# is a lot easier in many ways, with far fewer pitfalls, but sometimes I think we make it too simple for ourselves. C++ developers are often quite good at the language details because they have to be. As a C# developer it is possible to survive for much longer without knowing those details, but being ignorant of them will eventually result in nasty bugs. That eventuality will probably not happen on a lazy Tuesday morning with a lot of time to fix it. It will happen on a late Friday afternoon, right before the important production deployment scheduled for the weekend…
As C# developers I think that we should have a look at the C++ community and see what we can learn from them. So let’s dive into some code and see how we can break things.
Working with code, there are some operations that are repeated many times every day, hour or even minute. Knowing (and creating) shortcut keys for those operations not only saves time, but keeps focus on the code. Reaching for the mouse might not take much longer, but it switches the brain over to mouse-control mode, and when doing that, some of the code context kept in mind is lost.
These are my favourite key bindings, both standard and non-standard.
- Global go to file/symbol on Ctrl+,. Brings up a small search box in the current window for quick navigation to any source file, class or method in the solution. With VS2013 this got considerably better, as it no longer brings up a large dialog box.
- Go to current file in Solution Explorer on Ctrl+´, S. A somewhat awkward chord, but really useful for quickly getting to the current file in Solution Explorer. This is also the fastest and best way to rename a class: select the file in Solution Explorer, hit F2 to rename the file, and Visual Studio will automatically prompt you about renaming the class and all references to it.
- When coding, I usually split the window into two vertical tab groups, to view two code files at the same time. Most of the time one is the current test case and the other is the implementation (yes, I’m a TDD fan). Two custom bindings that I use a lot with the split window are Ctrl+Alt+Left and Ctrl+Alt+Right, to move the active window to the previous or the next tab group.
Did I mention that I love TDD? That means that the key bindings related to running tests are among those I use most.
Owin and Katana offer a flexible pipeline for external authentication, with existing providers for authentication by Google, Facebook, Twitter and more. It is also possible to write your own custom authentication provider and get full integration with the Owin external authentication pipeline and ASP.NET Identity.
Anatomy of an Owin Authentication Middleware
For this post I’ve created a dummy authentication middleware that interacts properly with the authentication pipeline, but always returns the same user name. From now on I will use the names from that dummy for the different classes.
A typical Katana middleware is made up of 5 classes:
- The main DummyAuthenticationMiddleware class.
- The internal DummyAuthenticationHandler class doing the actual work.
- A DummyAuthenticationOptions class for handling settings.
- An extension method in DummyAuthenticationExtensions for easy setup of the middleware by the client application.
- A simple internal Constants class holding constants for the middleware.
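As a rough sketch of how those five classes fit together, assuming the Microsoft.Owin.Security packages (the bodies are reduced to the bare minimum and are not the full dummy implementation):

```csharp
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Infrastructure;
using Owin;

public class DummyAuthenticationMiddleware
    : AuthenticationMiddleware<DummyAuthenticationOptions>
{
    public DummyAuthenticationMiddleware(
        OwinMiddleware next, DummyAuthenticationOptions options)
        : base(next, options) { }

    protected override AuthenticationHandler<DummyAuthenticationOptions> CreateHandler()
    {
        return new DummyAuthenticationHandler();
    }
}

internal class DummyAuthenticationHandler
    : AuthenticationHandler<DummyAuthenticationOptions>
{
    protected override Task<AuthenticationTicket> AuthenticateCoreAsync()
    {
        // Always "authenticates" the same dummy user.
        var identity = new ClaimsIdentity(Options.AuthenticationType);
        identity.AddClaim(new Claim(ClaimTypes.Name, "DummyUser"));
        return Task.FromResult(
            new AuthenticationTicket(identity, new AuthenticationProperties()));
    }
}

public class DummyAuthenticationOptions : AuthenticationOptions
{
    public DummyAuthenticationOptions()
        : base(Constants.DefaultAuthenticationType) { }
}

public static class DummyAuthenticationExtensions
{
    public static IAppBuilder UseDummyAuthentication(this IAppBuilder app)
    {
        return app.Use(typeof(DummyAuthenticationMiddleware),
            new DummyAuthenticationOptions());
    }
}

internal static class Constants
{
    public const string DefaultAuthenticationType = "Dummy";
}
```

The middleware class only wires things up; the handler is where the per-request authentication work happens.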
Owin makes it easy to inject new middleware into the processing pipeline. This can be leveraged to inject breakpoints in the pipeline, to inspect the state of the Owin context during authentication.
When creating a new MVC 5.1 project, a Startup.Auth.cs file is added to the project that configures the Owin pipeline with authentication middleware. By default, two middleware for authentication are enabled through calls to app.UseCookieAuthentication and app.UseExternalSignInCookie. There are also commented-out sections for Microsoft, Twitter, Facebook and Google authentication. This post will use Google authentication as an example and also add some “dummy” middleware that makes it possible to set breakpoints and inspect the authentication pipeline.
Inserting Breakpoint Middleware
The middleware are executed in the order they are listed in the file, so by inserting a simple middleware between the existing ones, it is possible to inspect how each middleware interacts with the authentication pipeline.
The injected middleware is just a few lines of code, but it allows two breakpoints to be set: on the opening and closing braces, which enables inspection before and after the call to the next middleware.
```csharp
app.Use(async (context, next) =>
{
    // Breakpoint 1 here: inspect the Owin context before the
    // next middleware in the pipeline runs.
    await next.Invoke();
    // Breakpoint 2 here: inspect the state after the next
    // middleware has run.
});
```