Blog

  • Writing Custom Constraints in NUnit

    NUnit supports several ways to assert against data. The recommended one, constraint-based, allows for a more naturally read syntax which supports the flexible chaining of conditions to prove simple or complex facts about the system.

    It follows the “Assert that” format:

    Assert.That(actualValue, Is.EqualTo(expectedValue));
    

    With the following components (listed left to right):

    • actualValue: The actual output of the system under test which you wish to validate.
    • Is: A starting clause. The most common is Is, but NUnit also defines Has, Does, and others to allow for readable tests.
    • EqualTo(): An example function which returns the constraint that validates your data. EqualTo returns an instance of the EqualConstraint class, which internally contains the validation logic. Other examples include LessThan() or Even.
    • expectedValue: The data to compare actualValue against, using the rules defined by the constraint.
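
    Constraints can also be chained with operators like And and Or to build compound assertions in a single readable line. As a quick illustration (the values here are arbitrary):

    Assert.That(actualValue, Is.GreaterThan(0).And.LessThan(100));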

    NUnit also contains built-in operators. For example, checking inequality is a matter of prepending the Not operator in front of EqualTo():

    Assert.That(actualValue, Is.Not.EqualTo(expectedValue));
    

    Similarly, if a situation requires checking a characteristic of a value instead of comparing it to another, then a unary constraint like Even can be used:

    Assert.That(actualValue, Is.Even);
    

    The built-in constraints will likely meet 99.9% of use cases, but there may be some domain-specific rules which aren’t covered out-of-the-box. For example, a math-oriented program may wish to validate that a number is prime. It would be very nice if a test could be written to read:

    Assert.That(actualValue, Is.Prime);
    

    NUnit supports this through custom constraints. Custom constraints are classes which extend NUnit’s own Constraint class.

    A PrimeConstraint may look like:

    public class PrimeConstraint : Constraint
    {
        public override string Description => "A prime value";

        public override ConstraintResult ApplyTo<TActual>(TActual actualValue)
        {
            var actualInt = Convert.ToInt32(actualValue);
            ArgumentOutOfRangeException.ThrowIfLessThanOrEqual(actualInt, 0, nameof(actualInt));

            // 1 is not prime by definition
            if (actualInt == 1)
            {
                return new ConstraintResult(this, actualValue, false);
            }

            for (int i = 2; i <= (int)Math.Sqrt(actualInt); i++)
            {
                if (actualInt % i == 0)
                {
                    // Not prime if we've found an evenly divisible factor
                    return new ConstraintResult(this, actualValue, false);
                }
            }

            return new ConstraintResult(this, actualValue, true);
        }
    }
    

    In addition to extending the Constraint class, the class must override the ApplyTo<TActual>() method, which validates the actualValue originally passed into Assert.That(). Hooking this into the NUnit syntax tree is then very easy thanks to the new C# 14 extension members feature: adding a new property onto NUnit’s static Is class and another onto the ConstraintExpression class can be achieved in one line each.

    public static class ConstraintExtensions
    {
        extension(NUnit.Framework.Is)
        {
            public static Constraint Prime => new PrimeConstraint();
        }
        extension(ConstraintExpression expression)
        {
            public Constraint Prime => expression.Append(new PrimeConstraint());
        }
    }
    

    And that’s it. The new constraint can be used in tests seamlessly afterwards, as if it were part of NUnit itself.

    [Test]
    public void Test1()
    {
        Assert.That(5, Is.Prime);
        Assert.That(4, Is.Not.Prime);
    }
    

  • Using Cooperative Cancellation in long-running tests

    NUnit 4 has added support for cooperatively cancelling long-running tests in several scenarios. Cooperative cancellation is the preferred way to end any long-running operation in .NET as it allows operations to end gracefully; however, it requires explicit coordination from the calling code. This coordination is handled, in part, by passing a CancellationToken into the potentially long-running method. The token can be signaled to a “cancelled” state, allowing the long-running operation to react and gracefully end itself.

    For example, consider the below code, which must get an external resource over HTTP and where a cancellationToken is passed in as the last parameter. This GetAsync() call will exit early when the operation is cancelled.

            var httpClient = new HttpClient();
            await httpClient.GetAsync("https://server", cancellationToken);
    

    But where does this cancellationToken come from? It’s possible to construct the token yourself and manage it through a CancellationTokenSource, however frameworks will often have a mechanism to do this for you.
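
    For reference, a minimal sketch of the manual approach might look like this (the five-second timeout is an arbitrary choice):

            // The source signals its token to the cancelled state after 5 seconds
            using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));
            var cancellationToken = cts.Token;

            var httpClient = new HttpClient();
            await httpClient.GetAsync("https://server", cancellationToken);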

    NUnit supports cooperative cancellation in a few ways, the simplest of which is through the CancelAfter attribute. This attribute will indicate to NUnit that it should manage a cancellation token on behalf of the test. The cancellation token itself can be used by the test in one of two ways:

    Read from the test context:

            [Test]
            [CancelAfter(CooperativeTimeoutMilliseconds)]
            public async Task WithCooperativeCancellation_Context()
            {
                var delay = TimeSpan.FromMilliseconds(600);
                var timer = Stopwatch.StartNew();
                await Task.Delay(delay, TestContext.CurrentContext.CancellationToken);
                timer.Stop();
    
                var expectedDelay = Math.Min(delay.Milliseconds, CooperativeTimeoutMilliseconds);
    
                Assert.That(timer.ElapsedMilliseconds, Is.EqualTo(expectedDelay).Within(50));
            }
    

    Passed by NUnit as an argument into the method:

            [Test]
            [CancelAfter(CooperativeTimeoutMilliseconds)]
            public async Task WithCooperativeCancellation_Argument(CancellationToken cancellationToken)
            {
                var delay = TimeSpan.FromMilliseconds(600);
                var timer = Stopwatch.StartNew();
                await Task.Delay(delay, cancellationToken);
                timer.Stop();
    
                var expectedDelay = Math.Min(delay.Milliseconds, CooperativeTimeoutMilliseconds);
    
                Assert.That(timer.ElapsedMilliseconds, Is.EqualTo(expectedDelay).Within(50));
            }
    

    Both conventions are supported by NUnit and allow a long-running test to respond to cancellation and gracefully end any long-running tasks in flight.

  • Retrying Tests on Exception in NUnit

    Tests, especially unit tests, should be reliable and reproducible. System or integration tests can, however, exercise many different parts of a codebase or even a network. This increase in the number of moving pieces can lead to decreased test reliability. NUnit’s Retry attribute was added to support cases where a developer believes a test has a reasonable chance of passing if retried after an initial failure.

    For example, the below test will fail about half the time it is run, but it will be run up to 3 times before being finally marked as failed in the run.

    [Test, Retry(3)]
    public static void TestRandomlyEven()
    {
        Assert.That(Random.Shared.Next(), Is.Even);
    }
    

    These retries, however, only occur when a test fails an assertion. Test failures due to unhandled exceptions are not retried. So the below test will also fail about half the time it is run, but those failures will not be retried and the test will immediately be treated as failed.

    [Test, Retry(3)]
    public static void TestRandomlyEven()
    {
        if (Random.Shared.Next() % 2 != 0)
            throw new InvalidOperationException("Odd number.");
        Assert.Pass();
    }
    

    This may be desirable depending on how a test is written and whether you want to be alerted to any instability or potential flakiness. There may be other cases, such as detailed system integration tests, where one may want network or database exceptions to be retried. Writing a custom attribute to support this is quite easy, as most of the pieces are already exposed as public types within NUnit. The below was written against NUnit 4.4 but should work on earlier versions too.

    [Test, RetryOnException(3)]
    public static void TestRandomlyEven()
    {
        if (Random.Shared.Next() % 2 != 0)
            throw new InvalidOperationException("Odd number.");
        Assert.Pass();
    }
    
    public class RetryOnExceptionAttribute(int tryCount) : NUnitAttribute, IRepeatTest
    {
        public TestCommand Wrap(TestCommand command)
        {
            // Reuse NUnit's own retry logic, wrapping the test so that unhandled
            // exceptions are converted into retryable failures first.
            return new RetryAttribute.RetryCommand(new FailOnExceptionCommand(command), tryCount);
        }

        private class FailOnExceptionCommand(TestCommand innerCommand) : DelegatingTestCommand(innerCommand)
        {
            public override TestResult Execute(TestExecutionContext context)
            {
                try
                {
                    return innerCommand.Execute(context);
                }
                catch (Exception ex)
                {
                    // Record the exception as an assertion-style failure so that
                    // RetryCommand considers the test eligible for a retry.
                    context.CurrentResult.SetResult(ResultState.Failure, ex.Message, ex.StackTrace);
                    context.CurrentResult.RecordTestCompletion();
                    return context.CurrentResult;
                }
            }
        }
    }
    

    Fortunately, the ability to retry on specific exceptions will be available as part of NUnit 4.5. A new RetryExceptions property will be added to the attribute, which can be given an array of exception Types to retry.

    [Test, Retry(3, RetryExceptions = [typeof(InvalidOperationException)])]
    public static void TestRandomlyEven()
    {
        if (Random.Shared.Next() % 2 != 0)
            throw new InvalidOperationException("Odd number.");
        Assert.Pass();
    }
    

    The entries are treated as base classes, so retrying all exceptions will become as simple as:

    [Test, Retry(3, RetryExceptions = [typeof(Exception)])]
    public static void TestRandomlyEven()
    {
        if (Random.Shared.Next() % 2 != 0)
            throw new InvalidOperationException("Odd number.");
        Assert.Pass();
    }
    

    A big thank you to manfred-brands for having added this feature recently.

  • Async Enumerable Test Sources in NUnit

    In my previous post I showed how to use awaitable TestCase or Value sources in NUnit 3.14. NUnit 4 continues the story of adding async support by also allowing TestCaseSource, ValueSource, or TestFixtureSource to return an IAsyncEnumerable.

    public class AsyncTestSourcesTests
    {
        [TestCaseSource(nameof(MyMethodAsync))]
        public void Test1Async(MyClass item)
        {
        }
    
        public static async IAsyncEnumerable<MyClass> MyMethodAsync()
        {
            using var file = File.OpenRead("Path/To/data.json");
            await foreach (var item in JsonSerializer.DeserializeAsyncEnumerable<MyClass>(file))
            {
                yield return item;
            }
        }
    
        public class MyClass
        {
            public int Foo { get; set; }
            public int Bar { get; set; }
        }
    }
    

    As with IEnumerable-backed sources, NUnit will lazily enumerate the collection to avoid bringing all the objects into memory at once, instead only generating the test cases or values when needed.

    Many async enumerable operations also require async disposal of the underlying resource after enumeration. NUnit will take care of calling the appropriate dispose method to ensure everything is cleaned up properly. If both DisposeAsync and Dispose methods are present, NUnit will call only the asynchronous DisposeAsync method, not Dispose.
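
    Tying this together, the file stream in the earlier source could participate in async disposal as well by switching to await using; a minimal sketch using the same hypothetical data.json file:

    public static async IAsyncEnumerable<MyClass> MyMethodAsync()
    {
        // With await using, the FileStream is disposed asynchronously
        // when enumeration ends or the enumerator is disposed
        await using var file = File.OpenRead("Path/To/data.json");
        await foreach (var item in JsonSerializer.DeserializeAsyncEnumerable<MyClass>(file))
        {
            yield return item;
        }
    }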

  • Async Test Sources in NUnit

    NUnit has long supported the definition of test cases in numerous forms, including via inline primitive data via the TestCaseAttribute or potentially more complex data returned from a method, property, or other source at runtime via TestCaseSourceAttribute. The latter has typically only supported synchronous methods. This complicated defining data-driven test cases where an internal operation required calling a Task-based API. A common example of this could be a test case which reads from a JSON source file or other stream using a method like JsonSerializer.DeserializeAsync().

    In the past this would mean an awkward and unnatural call using something like .GetAwaiter().GetResult():

    public class Tests
    {
        [TestCaseSource(nameof(MyMethod))]
        public void Test1(MyClass item)
        {
        }
    
        public static IEnumerable<MyClass> MyMethod()
        {
            using var file = File.OpenRead("Path/To/data.json");
            var t = JsonSerializer.DeserializeAsync<IEnumerable<MyClass>>(file).AsTask();
    
            return t.GetAwaiter().GetResult();
        }
    }
    

    NUnit 3.14 was released a few months ago and included support for “async” or task-based test case sources. Now a TestCaseSource can target a Task-returning method to allow for much more natural code:

    public class Tests
    {
        [TestCaseSource(nameof(MyMethodAsync))]
        public void Test1Async(MyClass item)
        {
        }
    
        public static async Task<IEnumerable<MyClass>> MyMethodAsync()
        {
            using var file = File.OpenRead("Path/To/data.json");
            return await JsonSerializer.DeserializeAsync<IEnumerable<MyClass>>(file);
        }
    }
    

    The above example focuses on Task, but any awaitable type such as ValueTask or a custom awaitable also works. Other “source” attributes such as TestFixtureSource or ValueSource are supported as well.
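
    As a minimal sketch of the ValueSource variant (the member names and values here are arbitrary), a parameter can point at a Task-returning member in the same way:

    public class MoreTests
    {
        [Test]
        public void Test2Async([ValueSource(nameof(GetValuesAsync))] int value)
        {
            Assert.That(value, Is.GreaterThan(0));
        }

        public static async Task<IEnumerable<int>> GetValuesAsync()
        {
            await Task.Yield();
            return new[] { 1, 2, 3 };
        }
    }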

  • Creating a Grunt plugin on Windows in MINGW

    I’ve been working with grunt a bit lately, and have found the need for a task that doesn’t seem to exist in the expansive list of existing plugins. So I thought I’d create my own. Fortunately, grunt has a great and simple page on doing this (http://gruntjs.com/creating-plugins).

    Unfortunately, I ran into some issues.

    I usually like to work in the Git Bash shell that comes with MINGW. Trouble is, this was causing some pathing issues. Specifically, with these two commands:

    1. Install the gruntplugin template with git clone git://github.com/gruntjs/grunt-init-gruntplugin.git ~/.grunt-init/gruntplugin (%USERPROFILE%\.grunt-init\gruntplugin on Windows).
    2. Run grunt-init gruntplugin in an empty directory.

    Apparently MINGW, or at least my version, has some issues resolving %USERPROFILE%. So I ended up with a cloned git repo in my local directory called %USERPROFILE%.grunt-initgruntplugin. After fixing that and moving it to my root user profile, I kept getting an “EINVAL” error on the next command. I figured this had to be a pathing issue too, so I dropped out of MINGW into a cmd shell by typing cmd. Except that didn’t quite do it. Maybe MINGW was intercepting input and filtering it in an unexpected way, but my problems became even worse. So, my fix:

    1. Add git to your OS Path variable (C:\Program Files\Git\bin)
    2. Run a regular command shell (cmd outside of MINGW)

    With those two small changes, everything worked flawlessly.

  • Determining the version of MINGW

    As a Windows developer who uses git and gcc, I found it easiest to install MINGW to help work in a console (Git Bash here is a fantastic shell extension!). Unfortunately, it’s been a while since I installed it and I forgot the version I’m using. After a bit of googling, it seems someone (stahta01) figured out years ago how to determine this with a batch script:

    http://forums.codeblocks.org/index.php?topic=9054.0

    Just so I don’t have to go searching for it again, I’ve copied their script and included it below:

    @echo off
    REM version-of-mingw.bat
    REM credit to Peter Ward work in ReactOS Build Environment RosBE.cmd it gave me a starting point that I edited.
    ::
    :: Display the current version of GCC, ld, make and others.
    ::
    
    REM %CD% works in Windows XP, not sure when it was added to Windows
    set MINGWBASEDIR=C:\MinGW
    REM set MINGWBASEDIR=%CD%
    ECHO MINGWBASEDIR=%MINGWBASEDIR%
    SET PATH=%MINGWBASEDIR%\bin;%SystemRoot%\system32
    if exist %MINGWBASEDIR%\bin\gcc.exe (gcc -v 2>&1 | find "gcc version")
    REM if exist %MINGWBASEDIR%\bin\gcc.exe gcc -print-search-dirs
    if exist %MINGWBASEDIR%\bin\c++.exe (c++ -v 2>&1 | find "gcc version")
    if exist %MINGWBASEDIR%\bin\gcc-sjlj.exe (gcc-sjlj.exe -v 2>&1 | find "gcc version")
    if exist %MINGWBASEDIR%\bin\gcc-dw2.exe (gcc-dw2.exe -v 2>&1 | find "gcc version")
    if exist %MINGWBASEDIR%\bin\gdb.exe (gdb.exe -v | find "GNU gdb")
    if exist %MINGWBASEDIR%\bin\nasm.exe (nasm -v)
    if exist %MINGWBASEDIR%\bin\ld.exe (ld -v)
    if exist %MINGWBASEDIR%\bin\windres.exe (windres --version | find "GNU windres")
    if exist %MINGWBASEDIR%\bin\dlltool.exe (dlltool --version | find "GNU dlltool")
    if exist %MINGWBASEDIR%\bin\pexports.exe (pexports | find "PExports" )
    if exist %MINGWBASEDIR%\bin\mingw32-make.exe (mingw32-make -v | find "GNU Make")
    if exist %MINGWBASEDIR%\bin\make.exe (ECHO It is not recommended to have make.exe in mingw/bin)
    REM ECHO "The minGW runtime version is the same as __MINGW32_VERSION"
    if exist "%MINGWBASEDIR%\include\_mingw.h" (type "%MINGWBASEDIR%\include\_mingw.h" | find "__MINGW32_VERSION" | find "#define")
    if exist "%MINGWBASEDIR%\include\w32api.h" (type "%MINGWBASEDIR%\include\w32api.h" | find "__W32API_VERSION")
    
    :_end
    PAUSE
    

    On my machine, it outputs exactly what I needed:

    MINGWBASEDIR=C:\MinGW
    gcc version 4.8.1 (GCC) 
    gcc version 4.8.1 (GCC) 
    GNU gdb (GDB) 7.6.1
    GNU ld (GNU Binutils) 2.24
    GNU windres (GNU Binutils) 2.24
    GNU dlltool (GNU Binutils) 2.24
    GNU Make 3.82.90
    #define __MINGW32_VERSION           3.20
    #define __W32API_VERSION 3.17
    Press any key to continue . . . 
    
  • Nod of the hat to integrating Popcorn js and BBB (Big Blue Button)

    It looks like a few people have been hitting my blog trying to find information on integrating Popcorn.js and Big Blue Button. I thought I’d take the opportunity to give a nod of the hat to a colleague, dseif, for his recent contribution towards making this possible at Hackanooga.

    THIS LINK has all the cool details.

  • Online and on Popcorn Parsing

    After a hiatus from the internet that seemed far longer than the month it actually was, I’m back online.

    I’m looking to continue my work on popcorn.js’s parser support, specifically with cleanup and adding styling support. After refactoring and preparing the code for what is to come, I’m ready and read up enough to begin. Of the three parsers in popcorn.js which support in-spec styles, I’ve decided to focus on the TTML parser over the other leading candidate for a first conversion, SSA/ASS.

    As with all the parser styling support, this will be a task of mapping the spec styles to CSS. While the TTML spec is significantly larger (read: by an order of magnitude) than the SSA/ASS spec, it should be easier because its style names and behaviours are so similar to what I have to work with in the browser (JavaScript and CSS). In fact, near the beginning of the TTML spec’s Styling section, the W3C advises:

    In particular since [CSS2] is a subset of this model, a CSS processor may be used for the features that the models have in common.

    That’s perfect! TTML is an XML-based format, so parsing is already made easier by JavaScript’s XML and DOM facilities. This means that, after extracting what I need, I can minimize the work involved in mapping and validating style names and attributes, instead passing them through to the browser for validation and processing. It’s not all easy, however. Some extra rules will need to be validated, such as style inheritance from other styles, invalid or inaccessible inheritance, and ensuring styles are applied to the appropriate elements.

    I’ve already started by parsing some basic region and style data (TTML’s equivalent to CSS classes), and structuring some unit tests. What remains at this point is extracting inline styles and applying all styles to the displayed text. And, of course, validation rules, further unit tests, demos, and tackling any as-yet-unforeseen issues. It’s already looking to be a fun, wild project.

    Look for further developments!

  • Combining Git Commits: Git Squash

    I often work by committing small, incrementally stable portions of my work to my working branch, then pushing the entire thing when it’s done. While great for working, this makes for a very cluttered commit history. Not only that, but it also makes it more difficult for peer reviewers to know what changes you’re trying to commit and contribute. To address this, I used to take the lazy (yet ironically more laborious) step of making a copy of my files, creating a clean working branch, and applying my accumulated changes in one copy-paste operation. In other words, I was manually squashing my many commits into one.

    Turns out git has this capability built in. I knew of it, but I never bothered to look into it. Turns out it’s simple, and could’ve saved me a lot of time. The process begins with:

    git rebase -i HEAD~4

    That command opens an interactive editor for the last 4 commits. Squashing is done by changing the commit log from picking all four commits separately to squashing the newest three into the first. So in the interactive editor, this:

    pick 01d1124 Adding license
    pick 6340aaa Moving license into its own file
    pick ebfd367 Jekyll has become self-aware.
    pick 30e0ccb Changed the tagline in the binary, too.

    Becomes this:

    pick 01d1124 Adding license
    squash 6340aaa Moving license into its own file
    squash ebfd367 Jekyll has become self-aware.
    squash 30e0ccb Changed the tagline in the binary, too.

    Just save that, adjust the combined commit message, and all is well.