Static - A Force for Good and Evil

Introduction

I’ve noticed a trend among C# programmers of avoiding the use of the “static” keyword. It seems I’m not the only one who’s noticed this either [1]. It’s not inherently limited to C# programmers, as C++ can be written in a similar manner, but the terminology bias (functions vs. methods) and C++’s clearer multi-paradigm stance mean it’s probably less susceptible.

 

There is a perception that “static” things – data and methods – are bad. In the wrong hands that can be true, but by throwing the proverbial baby out with the bathwater we have closed the door on embracing some of the goodness that functional-style programming brings.

 

This article attempts to dispel the myths by illustrating which uses of “static” are bad and which are actually beneficial.

Shared Mutable State

My guess is that the static keyword has got a bad rap because of past transgressions caused by functions that were designed decades ago, in a time when re-entrancy and multi-threading were something only specialist programmers had to worry about. Yes, strtok(), I’m looking at you.

 

This old C function which is used to tokenise a string has some serious side-effects. Behind the scenes it keeps track of the string being tokenised (which it also modifies) so that you can keep calling it without supplying the original input string when fetching the next token:-

 

char input[] = "unit test ";
char separators[] = " ";

assert(strcmp(strtok(input, separators), "unit") == 0);
assert(strcmp(strtok(NULL, separators), "test") == 0);

 

In a single-threaded environment you have to be careful not to “nest” use of it (e.g. tokenise a token), and in a multi-threaded environment this kind of behaviour is a disaster waiting to happen. Fortunately many C implementations avoided ruining a programmer’s day with spurious errors by quietly utilising thread-local storage, but this was a courtesy and not standards-defined behaviour.

 

The anti-pattern, for want of a better term, which can lead to this kind of sorry state of affairs, is to take a simple function that only depends on its inputs and then find you need to add new behaviour without changing its interface. In a waterfall-esque development process the new behaviour could be the need to cache results, and the inability to change the interface might come from discovering this very late during The Testing Phase. Of course adding a cache and then accidentally making it non-thread-safe is only going to exacerbate your woes at this point of the cycle. More likely it doesn’t need to be thread-safe initially but does later; only no one notices it’s not. The C# example below shows how easy it can be to naively add a cache to an existing class:-

 

public static class ThingyProcessor
{
    public static int CalculateThing(int input)
    {
        int output;

        if (Cache.TryGetValue(input, out output))
            return output;

        // Long calculation...

        Cache.Add(input, output);

        return output;
    }

    // Non-thread-safe collection
    private static Dictionary<int, int> Cache =
                   new Dictionary<int, int>();
}

 

Sharing mutable state via global variables (public static properties) has been a well-known bad smell for many years, but sharing it implicitly across threads can be dangerous too. It’s not just a matter of ensuring our own types are safe; it’s the whole object graph, so any 3rd party collection types have to be checked too.
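
If the shared cache genuinely has to stay, one option – a minimal sketch, reusing the hypothetical ThingyProcessor above – is to swap the home-grown Dictionary for a collection designed for sharing, such as ConcurrentDictionary, rather than bolting locks on afterwards:-

using System.Collections.Concurrent;

public static class ThingyProcessor
{
    public static int CalculateThing(int input)
    {
        // GetOrAdd() is safe to call from multiple threads, although the value
        // factory may run more than once if two threads race on the same key.
        return Cache.GetOrAdd(input, value => LongCalculation(value));
    }

    private static int LongCalculation(int value)
    {
        return value * value; // stand-in for the real long calculation
    }

    private static readonly ConcurrentDictionary<int, int> Cache =
                            new ConcurrentDictionary<int, int>();
}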

 

Although I’ve focused on complex types above, it should be noted that primitive values are not immune from this problem either. If anything they are more likely to “appear to work” than a complex type due to their small footprint. Use of volatile and the various Interlocked helper methods is required to keep them behaving properly.
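
As a minimal sketch, assuming a hypothetical shared counter, this is the sort of ceremony needed to keep even a single primitive value honest across threads:-

using System.Threading;

public static class RequestCounter
{
    private static long _count; // shared, mutable, and easy to get subtly wrong

    public static void Record()
    {
        // Atomic read-modify-write; a plain _count++ is not atomic.
        Interlocked.Increment(ref _count);
    }

    public static long Current()
    {
        // Atomic 64-bit read, even on a 32-bit CLR where a long can tear.
        return Interlocked.Read(ref _count);
    }
}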

 

Sometimes sharing mutable state is a necessary evil but there should be precious few times when we need to enter those murky waters. If possible we should look to change the interface or find some other design to make the problem disappear altogether and keep the code simple.

 

Before moving on let’s just go back to the procedural world of C to look at how we might tackle this problematic function. Putting aside for the moment the fact that strtok() is a published function specified by a standards process, we could avoid its state problem by changing the interface to allow the state to be passed back in by the caller. We’d also tackle the mutation of the input string to restrict the side-effects to just the state object.

 

typedef struct strtok
{
    const char* string;
    const char* separators;
    const char* tokenBegin;
    const char* tokenEnd;
} strtok_state_t;

const char* input = "unit test ";
const char* separators = " ";
strtok_state_t state;

strtok(input, separators, &state);
assert(strncmp(state.tokenBegin, "unit", state.tokenEnd - state.tokenBegin) == 0);

strtok(NULL, NULL, &state);
assert(strncmp(state.tokenBegin, "test", state.tokenEnd - state.tokenBegin) == 0);

 

There is still more that could be done to improve matters, such as using two separate functions (e.g. firsttok()/nexttok()). But this is C and combining state and functions into a more cohesive package is exactly what object-orientation allows us to do more cleanly in languages like C++ and C#. Hence in OO you might choose to present a strtok-style class in C# like this:-

 

public class StringTokeniser
{
    public StringTokeniser(string input, string separators)
    {
        // Remember inputs
    }

    public string NextToken()
    {
        // Find next separator or the string end by
        // searching onwards from the last ‘position’.
    }

    private readonly string _input;
    private readonly string _separators;
    private int _position;
}

Shared Immutable State

When talking about the pitfalls of “shared state” it’s important to qualify what sort of state you’re talking about. As we just discussed, shared mutable state can be dangerous when done badly. In contrast shared immutable state is much safer, at least once the potentially tricky initialisation phase is complete. Once we have a read-only data structure it can be used concurrently without the need for any kind of locking. Even if it can be referenced globally, which may still be another smell, it cannot be changed behind our backs.

 

For example, imagine you’re writing a simple XML parser in C#. To handle the translation of entity references, such as “&amp;” to their equivalents you might decide to use a lookup table. This table will likely be immutable and so it requires no additional locking in the event that two threads attempt to parse separate XML documents concurrently.

 

public class XmlParser
{
    private static string DecodeEntity(string entity)
    {
        string output;

        if (EntityTable.TryGetValue(entity, out output))
            return output;

        return entity;
    }

    private static readonly Dictionary<string, string> EntityTable =
                            new Dictionary<string, string>
    {
        { "&amp;", "&" },
        { "&gt;",  ">" },
        { "&lt;",  "<" },
    };
}

 

The initialisation issue is handled for us by the C# language, which guarantees that type constructors are run in a thread-safe manner. That said, we need to be especially careful inside a type constructor because if it throws, things start going horribly wrong as the type can’t be loaded.
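
As a minimal sketch, assuming a hypothetical EntityTables class, writing the type constructor out explicitly makes both the guarantee and the risk easier to see:-

using System.Collections.Generic;

public static class EntityTables
{
    // The CLR runs this type constructor exactly once, before first use,
    // even when multiple threads race to touch the type.
    static EntityTables()
    {
        // If anything in here throws, the type fails to load and every
        // subsequent use surfaces as a TypeInitializationException.
        Table = new Dictionary<string, string>
        {
            { "&amp;", "&" },
            { "&gt;",  ">" },
            { "&lt;",  "<" },
        };
    }

    public static string Decode(string entity)
    {
        string output;
        return Table.TryGetValue(entity, out output) ? output : entity;
    }

    private static readonly Dictionary<string, string> Table;
}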

Static Classes

C#, unlike C++, does not allow methods to exist outside of classes (often called free functions in C++). Consequently you are forced into defining a class even when all you want to write is a simple function. I suspect this has an undesirable side-effect on C# programmers because I’ve seen them create classes that hold no state, or only hold compile-time immutable state (i.e. constants), e.g.:-

 

public class ConfigurationSettings
{
    public string DatabaseName { get { return ". . . "; } }
}

 

It’s not always as obvious as this and it might be returned as a property of another class. These extra levels of indirection make it harder for a static code analysis tool to spot it and suggest a refactoring.

 

public class Configuration
{
    public ConfigurationSettings Settings { get { return _settings; } }

    private readonly ConfigurationSettings _settings =
                     new ConfigurationSettings();
}

 

In a managed environment like C#, classes such as the ConfigurationSettings class above are quite literally garbage – the objects just get created and then destroyed again and their behaviour can be determined entirely at compile-time. As with free functions, constants in C# also need to be defined as part of a class:-

 

public static class ConfigurationSettings
{
    public const string DatabaseName = ". . . ";
}

 

The C# answer to classes which shouldn’t be instantiated is to declare them “static”. In essence the class is now acting as merely a namespace, albeit one that you can’t elide with a “using” declaration at the top.

 

The canonical example in C# for a static class of “pure” functions (deterministic functions that only depend on their inputs and have no side-effects) is probably the Math class, which plays host to fundamentals like Abs() and Min().

 

public static class Math
{
    public static int Abs(int value)
    {
        return (value < 0) ? -value : value;
    }
}

Static Functions

Right back at the beginning I suggested that one reason there might be a fear of static functions is where their implementation could end up. I’d also suggest that programmers find it easier to pass parameters to functions by making them implicit, i.e. through class members accessible via “this”.

 

Back in the 1980s Meilir Page-Jones wrote a book called The Practical Guide to Structured Systems Design. He goes into detail about the various types of coupling we might see in code, ranking each category by how desirable it is from a maintenance perspective. Whilst the most serious forms of coupling should be avoided, Page-Jones suggests that the weaker forms can be used effectively in the right hands, but also have the potential to cause grief in the wrong ones.

 

At the farthest end of the spectrum we have Content Coupling which is of little concern in today’s languages. Back in the days of assembler programming you could jump from the middle of one “function” right into the middle of another, meaning you couldn’t ever be sure where you’d come from. Next up is Common Coupling, i.e. global variables. As the name implies they have the ability to affect any and every part of the code-base in unanticipated ways.

 

Then we come to Control Coupling. This is where one function passes some kind of flag or signal to tell another how to behave. Depending on the direction of the signal, either a child is telling its parent how to behave or the parent knows too much about the child’s implementation; either way it’s a symptom of low cohesion.
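
A minimal sketch, using hypothetical report-generating functions, of the flag-passing style alongside a lower-coupling alternative:-

public static class ReportGenerator
{
    // Control Coupling: the flag tells the function which behaviour to perform,
    // so the caller has to know about the implementation's two code paths.
    public static string Generate(string title, bool asSummary)
    {
        return asSummary ? GenerateSummary(title) : GenerateDetail(title);
    }

    // Lower coupling: two cohesive functions and the caller simply picks one.
    public static string GenerateSummary(string title)
    {
        return "Summary: " + title;
    }

    public static string GenerateDetail(string title)
    {
        return "Detail: " + title;
    }
}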

 

Moving onwards we come to Stamp Coupling. This oddly named form of coupling is about passing excessive input to functions that don’t need it. For example, if you had a function that formatted a customer’s full name from their first and last names, you should consider only passing those two arguments, not an entire Customer record. By passing the entire type you make the function (appear to be) dependent on attributes it doesn’t use.
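
As a minimal sketch of that example, assuming a hypothetical Customer type, compare the two signatures:-

public class Customer
{
    public string FirstName;
    public string LastName;
    public string Address;   // unused by the formatting, but dragged along anyway
}

public static class CustomerFormatter
{
    // Stamp Coupling: the whole record is passed although only two fields are used.
    public static string FormatFullName(Customer customer)
    {
        return customer.FirstName + " " + customer.LastName;
    }

    // Data Coupling: only the data the function actually needs.
    public static string FormatFullName(string firstName, string lastName)
    {
        return firstName + " " + lastName;
    }
}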

 

Finally we reach Data Coupling which is analogous to a “pure” function. What Page-Jones tells us is that the easiest code to reason about is this style of function which, as mentioned earlier, has a deterministic output solely based on its direct inputs with no side-effects. In essence, given that some form of coupling is a necessity to do anything useful, then Data Coupling is the most preferable.

Member Coupling

His book was published in a time before Object-Orientated Programming was A Big Thing. He is also concerned more with inter-module coupling than with intra-module coupling, such as between methods of the same class. As the size of a class grows it becomes more common to rely further and further on data being passed between methods via its own state, i.e. its members. I believe there is a form of Stamp Coupling going on here: any method might use its input arguments plus any aspect of the object’s current state or per-class state, and so it is impossible to know what that is without looking at the implementation of an instance method. And that is what coupling is all about – being able to reason about the knock-on effects of a change to other parts of the code.

 

The over-reliance on implied state makes it harder to refactor code later because pulling that state out into another data structure may require lots of unexpected fixing up of other methods. I’ve found that taking Page-Jones’s advice to favour Data Coupling right into the heart of classes has made code easier to read because simple methods start looking like simple black boxes again.

 

As a simple example consider the class-based OO version of the strtok() function I mentioned earlier. In the implementation, when it comes to finding the next token we could rely solely on the implied state held in the member variables and code it up as one method:-

 

public class StringTokeniser
{
    . . .
    public string NextToken()
    {
        if (_position == _input.Length)
            return null;

        var start = _position + 1;
        _position = _input.IndexOf(_separators, start);
        if (_position == -1)
            _position = _input.Length;

        return _input.Substring(start, _position - start);
    }

    private readonly string _input;
    private readonly string _separators;
    private int _position = -1;
}

 

One alternative would be to hand off the finding of the end of the token to a separate little method that is only dependent on its inputs:-

 

public class StringTokeniser
{
    . . .
    public string NextToken()
    {
        if (_position == _input.Length)
            return null;

        var start = _position + 1;
        var end = FindTokenEnd(_input, _separators, start);
        var token = _input.Substring(start, end - start);

        _position = end;
        return token;
    }

    public static int FindTokenEnd(string input, string separators,
                                   int start)
    {
        var end = input.IndexOf(separators, start);
        if (end == -1)
            return input.Length;
        return end;
    }

    private readonly string _input;
    private readonly string _separators;
    private int _position = -1;
}

 

Although the first implementation of NextToken() is quite small there is perhaps a temptation to put a comment above the bit of code that finds the end of the token, because it’s not immediately apparent due to the overloaded use of the _position member (on entry it marks where to start searching for the next token, and by the end it holds where that token finishes). Whenever I find myself wanting to write a comment I consider that to be a sign I should use the Extract Method or Introduce Explaining Variable [2] refactorings instead.
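
For instance, a minimal sketch of Introduce Explaining Variable applied to the first version stops _position pulling double duty within the method:-

    public string NextToken()
    {
        if (_position == _input.Length)
            return null;

        var start = _position + 1;

        // The explaining variable gives the intermediate result a name...
        var end = _input.IndexOf(_separators, start);
        if (end == -1)
            end = _input.Length;

        var token = _input.Substring(start, end - start);

        // ...and the member is only updated once, right at the end.
        _position = end;
        return token;
    }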

 

There might be a knee-jerk reaction that splitting code up into so many simple methods would create a big hit on performance. It is possible, but then we all know that premature optimisation is a dangerous pastime. The JIT compiler in .Net and modern C++ compilers can do a pretty good job these days of inlining methods so you’ll probably not notice it in the vast majority of your code.
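
If profiling ever does show that a tiny helper isn’t being inlined, .NET 4.5 added a hint you can apply – a sketch only, and something to reach for after measuring rather than before:-

using System.Runtime.CompilerServices;

public static class Tokens
{
    // A hint to the JIT, not a command; most small methods get inlined anyway.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static int FindTokenEnd(string input, string separators, int start)
    {
        var end = input.IndexOf(separators, start);
        return (end == -1) ? input.Length : end;
    }
}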

Exception Safety

Whilst it’s highly unlikely that any client code would attempt to recover directly from an OutOfMemoryException thrown from our StringTokeniser class, it is a library class and they often get used in mysterious ways. Writing exception-safe code is hard, especially when it’s so easy to mutate state at an unsafe moment.

 

A common example I’ve seen of this is when two-phase construction is used and the second phase throws an exception, leaving a member mutated by accident:-

 

public class ProcessManager
{
    . . .
    public void ExecuteJob(Job job)
    {
        if (_process == null)
        {
            _process = new Process();
            _process.Start(_applicationName);
        }
        . . .
        _process.Execute(job);
    }
    . . .
    private readonly string _applicationName;
    private Process _process;
}

 

Here, if the Start() method throws, the _process member will be left pointing to a partially initialised object. When the second call to ExecuteJob() comes in it will assume the _process member is fully initialised and will try to use it.

 

The general pattern for writing exception-safe code is to perform any work that might throw off to the side and then commit the changes with non-throwing operations. In this example we could have written it like this:-

 

    public void ExecuteJob(Job job)
    {
        if (_process == null)
        {
            var process = new Process();
            process.Start(_applicationName);

            _process = process;
        }
        . . .
        _process.Execute(job);
    }

 

Internal factory methods are a good fit for being static because object creation and initialisation is often full of code likely to throw, which you might want to keep at arm’s length until you know you’re dealing with fully constructed objects. By factoring the creation out into a static method you also convey to both the compiler and the reader that they shouldn’t be messing with any of this object’s state at that point in its lifecycle, e.g.

 

public class ProcessManager
{
    public void ExecuteJob(Job job)
    {
        if (_process == null)
            _process = CreateProcess(_applicationName);
        . . .
        _process.Execute(job);
    }

    private static Process CreateProcess(string applicationName)
    {
        var process = new Process();
        process.Start(applicationName);
        return process;
    }
    . . .
    private Process _process;
}

Extension Methods

Whilst C# might not support “free functions”, it does have Extension Methods and these often embody the practice of writing small independent methods. They are members of a static class and are themselves declared static, despite the fact that they appear to be called as instance methods. If you ever wished that the C# String class had an instance method that could tell you whether a string was empty or just contained white-space, you can make it happen yourself:-

 

public static class StringExtensions
{
    public static bool IsBlank(this string value)
    {
        return (value.Trim().Length == 0);
    }
}

public static class Program
{
    public static int Main(string[] args)
    {
        . . .
        string connectionString = configuration.ConnectionString;

        if (!connectionString.IsBlank())
            connection = OpenConnection(connectionString);
        . . .
    }
}

 

This is often how I find extension methods come about. Initially they start as a simple static method in a class that looks suspiciously as though the first argument really wants to be “this”. Once reuse rears its head it’s a simple step to factor it out into a common extension method. Alternatively it could be pulled out as a formal extension method, but left in an internal static class alongside the current consumer to avoid publishing it formally, as that comes with the possible burden of needing to write separate unit tests.
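
A minimal sketch of that starting point, with hypothetical names, where the helper is still just a private static method of its only consumer:-

public static class Program
{
    public static int Main(string[] args)
    {
        var connectionString = (args.Length > 0) ? args[0] : "";

        if (!IsBlank(connectionString))
            System.Console.WriteLine("Opening " + connectionString);

        return 0;
    }

    // The first argument looks suspiciously as though it wants to be "this";
    // one more consumer and this becomes the IsBlank() extension method above.
    private static bool IsBlank(string value)
    {
        return (value.Trim().Length == 0);
    }
}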

Building Classes from Static Methods

The Object-Orientated paradigm is good for creating types that represent things, but when it comes to algorithms and processes it can start to get ugly. Take the process of parsing a string of XML into a DOM, for example. Whilst the input string can be an object, and the output is a tree of objects, the algorithm used to process the characters in the string and create the tree of objects feels much less object-like. It feels to me more like a function that transforms one into the other. Yes, there will be some transient state involved during the processing, but by and large the decomposition of the problem has more of a focus on functions.

 

If I was using Test-Driven Development to tackle a problem like this my initial tests would very likely start out with just a simple function:-

 

[Test]
public void when_xml_is_empty_then_dom_is_empty()
{
    const string emptyXml = "";
    var document = XmlParser.ParseDocument(emptyXml);
    Assert.That(document, Is.Not.Null);
}

 

You can argue about whether it should be a member of a class called XmlDocument, or XmlReader, etc. but either way I wouldn’t start out expecting to create a class like this:-

 

[Test]
public void when_xml_is_empty_then_dom_is_empty()
{
    const string emptyXml = "";
    var parser = new XmlParser();
    var document = parser.ParseDocument(emptyXml);
    Assert.That(document, Is.Not.Null);
}

 

Anyone who uses FizzBuzz [3] or the Roman Numerals kata in their interview process to separate the wheat from the chaff will probably see this kind of thing. It’s not wrong, per se, but it can lead to the kind of “empty” classes described above.

 

From a design patterns perspective, what my eventual function will become is akin to a façade over a bunch of other functions. As the number of tests grows, so will the number of internal functions. Internal refactoring will start to push some of those out into separate (internal) classes which in turn will likely receive their own more focused unit tests. The handling of XML entity references earlier was very simplistic, just a lookup table, but as more scenarios are discovered so the complexity of that aspect of the implementation will likely increase.

 

The state required during parsing is entirely transient from the perspective of the caller. It could be held inside an instance of the XmlParser class, where the public class gets a private constructor because it just becomes an implementation detail of the static ParseDocument() method. But why even expose that? Why not create an internal class, say XmlParserImpl, and treat ParseDocument() as a sort of top-level factory method?

 

internal class XmlParserImpl
{
    . . .
}

public static class XmlParser
{
    public static Dom ParseDocument(string xml)
    {
        var parser = new XmlParserImpl(xml);

        return parser.ParseDocument();
    }
}

 

If the implementation class is internal then the entire type is encapsulated, and so we could hold the state as a Dumb Data Object [4] and access it in our static methods via public fields (so long as we promise never to expose it). Then again we could hold the state entirely on the stack by passing it as parameters to recursive functions.
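
A minimal sketch of that shape, where the ParseState and Dom types are hypothetical stand-ins, might look like this:-

public class Dom
{
}

internal class ParseState   // Dumb Data Object: nothing but public fields
{
    public string Xml;
    public int Position;
}

public static class XmlParser
{
    public static Dom ParseDocument(string xml)
    {
        var state = new ParseState { Xml = xml, Position = 0 };

        return ParseDocument(state);
    }

    private static Dom ParseDocument(ParseState state)
    {
        // Each static method is handed the state it needs, rather than
        // relying on members of a parser instance.
        state.Position = state.Xml.Length;

        return new Dom();
    }
}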

Methods on Immutable Types

Jon Skeet raised a question at the Norfolk Developers Conference [5] about how to make methods on immutable types more intention-revealing. His canonical example in C# involves the DateTime type, like this:-

 

DateTime date = DateTime.Today;
date.AddDays(1);

 

The method name AddDays suggests that it will add 1 day to date and give us the date for tomorrow. Only it won’t. It will create a new DateTime value based on date (with an offset) which will subsequently be thrown away by the caller. It’s an easy mistake to make and one of the reasons why writing tests is such a worthy pursuit. The example should have been:-

 

DateTime date = DateTime.Today;
DateTime tomorrow = date.AddDays(1);

 

Jon went on to question whether there is a way to name methods to make this pattern (returning a new value instead of mutating the existing one) more revealing. He proposed this for the example above:-

 

DateTime tomorrow = date.PlusDays(1);

 

It is an improvement, but I would posit that it’s just too subtle a change in language to really make a difference. Part of the problem is that mutability is the default position in C#; for example the object initializer syntactic sugar relies on the class having writable properties.
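
As a minimal sketch, assuming a hypothetical ConnectionSettings type, the sugar only works because the properties are publicly writable, which quietly nudges types towards mutability:-

public class ConnectionSettings
{
    // The object initializer below needs these setters to be public.
    public string Host { get; set; }
    public int Port { get; set; }
}

public static class Example
{
    public static ConnectionSettings Create()
    {
        return new ConnectionSettings { Host = "localhost", Port = 1433 };
    }
}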

 

My own stance is that once again we can draw from the functional side and use static methods (aka functions) to more clearly suggest that a new value will be created from an existing one and some adjustment:-

 

DateTime tomorrow = Date.AddDays(date, 1);

 

If you started making the same mistake above with this style, would it be any more obvious?

 

Date.AddDays(date, 1);

 

At least this way you start by invoking a static method and so that should tell you something extra that you don’t get by invoking an instance method.

Summary

The “static” keyword is in need of a public relations exercise in C# to try and overcome prejudices caused by misunderstanding its role. C# might have started out with a heavy bias towards the object-orientated paradigm but over the years its audience and the language have tried to embrace a multi-paradigm world. This means a stronger focus on immutability and the use of functions instead of objects and mutable state, at least for those problems where it’s beneficial.

 

The natural outcome of this is code that is easier to reason about and inherently thread-safe. Whilst another language such as F# might be a better tool for the job by removing some of the ceremony, there is no reason why you cannot adopt some of its practices to improve a C# codebase.

Acknowledgements

A debt of gratitude is owed to Ric Parkin and Roger Orr for helping me polish this article.

References

[1] https://twitter.com/codemonkey_uk/statuses/385518295958683649

[2] http://martinfowler.com/books/refactoring.html

[3] http://c2.com/cgi/wiki?FizzBuzzTest

[4] http://c2.com/cgi/wiki?DumbDataObject

[5] http://nordevcon.com/

 

Chris Oldwood

10 March 2014

 

Bio

Chris is a freelance developer who started out as a bedroom coder in the 80’s writing assembler on 8-bit micros; these days it’s C++ and C#. He also commentates on the Godmanchester duck race and can be contacted via gort@cix.co.uk or @chrisoldwood.