Javascript Testing on Travis CI with Karma and Mocha

Posted November 24, 2017 by Emily Dolson in Software Development

When working on large software projects, automated testing dramatically reduces the risk of unknown bugs creeping into your code. That’s why I’m a huge fan of services like Travis CI, which automatically test all code pushed to a git repository. However, some code is a lot easier to write automated tests for than other code. Recently, I’ve been working on testing some code that has proved very challenging to test: data visualization code compiled from C++ to Javascript via Emscripten. Now that I’ve got it working, I’m writing this post to document the process, in hopes that it will save others a lot of trouble. My ultimate solution was to use the Mocha testing framework and the Karma test runner, but I’m going to start out by talking about a few other things I tried and why they didn’t work. If you’re using normal Javascript (i.e. not compiled from C++) and just want to read about how to configure things, feel free to skim the next section.

Choice #1: Which testing framework to use?

Fundamentally, tests should just be code that fails to run if the underlying library you’re testing doesn’t work the way it’s supposed to. Usually, you don’t need a testing framework for this – assert statements would be sufficient. However, testing frameworks are usually a good idea: they provide functions that give you more information than just an assertion error, they helpfully summarize the results of your tests, and they often make your tests much easier to read. Many also provide ways to automatically run all of your tests (i.e. a test runner). In Javascript, a testing framework is particularly important, because there isn’t a standardized assert function that works across all browsers.

Since the code I’m testing only works when compiled to Javascript, I concluded that a Javascript testing framework was the way to go here. There are two main options: Mocha and Jasmine. I got the mild impression from some Google searches that Mocha handles testing asynchronous code better than Jasmine. Since some of my code is asynchronous (there are functions that don’t necessarily finish running before the next line of code is called), I started with Mocha. It worked great, so I never really investigated Jasmine further. Mocha is tightly integrated with another library, Chai, which handles assertions (or otherwise lets you tell the code how it should know that a test passed). Technically you could use a different library in place of Chai, but I saw no reason to. Chai lets you choose from three different syntaxes for making assertions. Since this code is part of a C++ project, I went with the most C++-like one, to minimize confusion.
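If you haven’t used Mocha and Chai before, here’s a minimal sketch of what a test looks like with Chai’s assert-style syntax (the add function is hypothetical, purely for illustration; this assumes mocha.js and chai.js are already loaded on the page and mocha.setup('bdd') has been called):

// Hypothetical function under test
function add(a, b) { return a + b; }

// "describe" groups related tests; "it" defines a single test case
describe('add', function() {
    it('should return the sum of its arguments', function() {
        chai.assert.equal(add(2, 3), 5);
        chai.assert.deepEqual([add(1, 1), add(2, 2)], [2, 4]);
    });
});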

Emscripten-specific note: This may or may not be the best approach, but the best way I’ve found so far to write my tests is to write functions in C++, use Empirical’s JSWrap function to convert them to Javascript, and then call them from a Javascript block in which I test that they worked. Here’s an example in which I’m testing code that uses the Empirical library to make a website and draw a line graph on it:


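// Note: #includes omitted for brevity; this assumes the relevant Empirical web
// and d3 visualization headers (plus <array> and <string>) have been included.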
// emp::web::Documents and things that go on them always have to be global
emp::web::Document doc("line_graph");
emp::web::LineGraph<std::array<double, 2> > line_graph("x", "y", 500, 250);

// Define test function
void MakeLineGraph(std::string callback) {
    // Put line_graph on document (a div on the web page)
    doc << line_graph;
    // Tell the line graph to call a function with the name
    // stored in callback when its done drawing data
    line_graph.SetDrawCallback(callback);
    // Load and graph some example data
    line_graph.LoadDataFromFile("/base/tests/test-data/test-line-graph.csv");
}

int main() {

    // Wrap function to be tested - we can now call it 
    // from Javascript with emp.MakeLineGraph(callback);
    emp::JSWrap(MakeLineGraph, "MakeLineGraph");

    // Start Javascript block
    EM_ASM({

        // Mocha tests always start with a "describe block" 
        // that lists the name of the object we're testing
        describe('Line Graph', function() {

            // Optional function that gets run before test functions.
            // We're using it to set up the line graph that we're testing
            before( function(done) {
                // "MakeLineGraph" is an asynchronous function. "done" is a function 
                // that we call to let mocha know the asynchronous function is done.
                // Store the done function in the emp object for easy access later.
                emp.done = done;
                // Call our test function and tell it that the 
                // callback can be found in emp.done
                emp.MakeLineGraph("done");
            });

            // Actual tests are created with "it". The first argument to "it"
            // is a string describing what the test
            // is testing. By convention, these are phrased as how the 
            // code "should" behave if everything is working
            it('should have data-points for each piece of test data', function() {
                var data_points = d3.select("#line_graph").selectAll(".data-point");
                chai.assert.equal(data_points[0].length, 5);
                chai.assert.deepEqual(data_points.data(), [[1, 5], [2, 3], [3, 6], [4, 1], [5, 10]]);
            });

            it('should have a line connecting the data points', function() {
                var path = d3.select("#line_graph").selectAll(".line-seg").attr("d");
                chai.assert.equal(path, "M60,110L162.5,150L265,90L367.5,190L470,10");
            });

        });
    });
}

Choice #2: What environment will you run tests in?

If your code doesn’t have any kind of visual component, doesn’t manipulate the web page it’s supposed to be running on, and in general doesn’t do anything strange: congratulations – you can just run it on the command line in node! There are many tutorials on the internet for using Node, so I’m not going to address that option here. What if your code does care about the fact that it’s on a web page (for instance, it interacts with the underlying Document Object Model (DOM), as my d3 code above does)? You have a few options. First, you can try to emulate a browser environment in node, using a library like jsdom or benv. I could not for the life of me get either of those to work with my code. My best guess is that it’s because this code tries to use libraries like d3 and jquery before main starts, before I actually have a chance to load them. Another contributing factor may be that loading them into the window so that you can use them without any syntax changes seems to be incredibly finicky. I eventually gave up on these options, but your mileage may vary.
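For completeness, here’s roughly what the jsdom route looks like (a generic sketch of the standard recipe, assuming a recent jsdom with the JSDOM constructor; not a configuration that actually worked for my Emscripten-compiled code):

// Generic jsdom setup in node, shown only so you know what this route looks like
var JSDOM = require("jsdom").JSDOM;

// Build a fake document containing the div the visualization expects
var dom = new JSDOM('<!DOCTYPE html><body><div id="line_graph"></div></body>');

// Expose browser-style globals so code written for the browser can find them
global.window = dom.window;
global.document = dom.window.document;

// In principle you could now load d3, jquery, and your compiled code here.
// In my case, the Emscripten output wanted those libraries before main() ran,
// which is where this approach fell apart.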

The alternative is to actually run your code in a browser. By far the best browser for the job seems to be PhantomJS, which acts like a normal browser but doesn’t try to actually display any graphics (i.e. it’s “headless”). Unfortunately, PhantomJS doesn’t support typed arrays, and typed arrays are clearly one of Emscripten’s favorite Javascript features. So running Emscripten-compiled Javascript in PhantomJS appears to be a no-go, at least for now. That leaves one more option: regular old Firefox. Firefox is loaded by default on Travis CI, and they have nice instructions for making it run. With nowhere else to turn, this is the option I went with.

Choice #3: How are you going to actually run your tests?

If we could run our tests in node, we could just invoke mocha from the command line and be done. Now that we’re in Firefox, though, things aren’t quite so simple. Mocha has great examples of writing an html file to run your tests, but it only shows the results in the browser, which isn’t much help when you’re running your tests on Travis CI and can’t actually see the browser. Additionally, if you want your Javascript to access other files in your repo (example data files, for instance), you’ll need to start up a server of some kind to host your files so you don’t get a cross-origin request error. Fortunately, there are two programs designed to solve all of these problems. Unfortunately, neither is perfect.

Testee

The first is called Testee. Testee is wonderfully straightforward to get working. Make a TestRunner html file as described in the mocha documentation, but replace:

  <script>
    mocha.checkLeaks();
    mocha.run();
  </script>

with

  <script>
    onload = function(){
        mocha.checkLeaks();
        mocha.run();
    };
  </script>

to ensure that the tests run when the page loads.

Then call testee TestRunner.html --browsers firefox. A server will start, a Firefox window will open up and run your code, and the results will get printed to the terminal (note that you do need to run testee from a directory that contains all of the files you want to access so that they will all be served). Exactly what we wanted, right? Almost. We can run this on Travis, with the following .travis.yml file (this assumes that make handles all necessary compilation and installation of dependencies, including testee, which can be achieved with npm install):


before_script:
- export DISPLAY=:99.0
- sh -e /etc/init.d/xvfb start
- sleep 3 # give xvfb some time to start
script:
- make
- testee TestRunner.html --browsers firefox

However, when we do, there’s a good chance the build will simply hang when it gets to running Testee. That’s because, if Testee encounters an error in the wrong place, the error stops everything from running: the tests never finish and never give control back to the shell. Worse, the browser eats the error message. This makes debugging incredibly challenging if you ever get an error on Travis that you can’t replicate on your local set-up. Since I encountered a few such errors even in an incredibly simple test file, I decided that the ease of use wasn’t going to be worth the pain of debugging. So I switched to Karma.

Karma

Karma requires a lot of configuration. You can achieve some of this by running karma init and stepping through its wizard. I still found myself having to change a lot of things manually, though. The most important thing to understand is that karma is building its own html file, so it only wants Javascript files from you, not html files (there is an html2js preprocessor that's supposed to take care of them, but I couldn't get it to work). Here is my karma.conf.js file:


// Karma configuration
// Generated on Mon Oct 03 2016 15:06:06 GMT-0400 (EDT)

module.exports = function(config) {
  config.set({

    // base path that will be used to resolve all patterns (eg. files, exclude)
    // This file doesn't live in the root directory of my project, but it's easier to
    // specify paths from there. So the base path says all paths should start by going two levels
    // up (because this file is stored two levels down)
    basePath: '../../',

    // frameworks to use
    // available frameworks: https://npmjs.org/browse/keyword/karma-adapter
    frameworks: ['mocha'],

    // list of files / patterns to load in the browser
    files: [
      {pattern: 'examples/web/jquery-1.11.2.min.js'},
      {pattern: 'third-party/node_modules/mocha/mocha.js'},
      {pattern: 'third-party/node_modules/chai/chai.js'},
      {pattern: 'web/d3/d3.min.js'},
      {pattern: 'web/d3/d3-tip.js'},
      {pattern: 'tests/web/test_header.js'},
      {pattern: 'tests/test-data/lineage-example.json', included: false},
      {pattern: 'tests/test-data/test-line-graph.csv', included: false},
      {pattern: 'tests/web/test_visualizations.js.map', included: false},
      {pattern: 'tests/web/test_visualizations.js'}
    ],


    // list of files to exclude
    exclude: [
    ],


    // preprocess matching files before serving them to the browser
    // available preprocessors: https://npmjs.org/browse/keyword/karma-preprocessor
    preprocessors: {
    },


    // test results reporter to use
    // I installed the spec reporter as a plug-in because it prints much easier to read results to the command line
    reporters: ['spec'],


    // web server port
    port: 9876,


    // enable / disable colors in the output (reporters and logs)
    colors: true,


    // level of logging
    // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
    logLevel: config.LOG_INFO,


    // enable / disable watching file and executing tests whenever any file changes
    autoWatch: false,


    // start these browsers
    // available browser launchers: https://npmjs.org/browse/keyword/karma-launcher
    browsers: ['Firefox'],


    // Continuous Integration mode
    // if true, Karma captures browsers, runs the tests and exits
    // This needs to be on to work in Travis CI
    singleRun: true,

    // Concurrency level
    // how many browsers should be started simultaneously
    concurrency: Infinity
  })
}

Notes about this file:

  • The most important thing to get right is the list of files in "files." It should include your test code (as a Javascript file), any libraries you're including (as Javascript files), and any files that any of those files need to access (such as .js.map files or data files). Files that do not contain Javascript code should have the "included" flag set to "false." All files that have it set to true (the default) will be included in "script" tags, just as if you were including them into an html file. Files with "included" set to "false" will be available on the server but not included into the html.
  • What if you need to have certain html elements in existence before your test file starts running? The best workaround I found is to write Javascript that creates them, and include the file containing that Javascript (that's the test_header.js file in my config example; see the sketch after this list). Always make sure to include your actual test files last, so everything else is already set up.
  • When you're going through the configuration wizard, Karma will give you the option to load all of your required libraries with require.js instead of by including them in the list of files. If you can make that work, more power to you.
  • Note that there are no plugins specified. This is because karma automatically includes all npm modules that start with karma and are stored in the same node_modules directory as karma. So as long as you install all of your dependencies to the same place, you shouldn't need to worry about listing plug-ins.
  • Some of the libraries being loaded (mocha and chai) were installed with npm as described below, while others (jquery and d3) are stored elsewhere in the repository.
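As an illustration of the workaround mentioned above, here’s a hypothetical sketch of what a file like test_header.js might contain (the div id matches the one the line graph tests above expect):

// Create the DOM elements the tests expect before the test files
// (and the Emscripten module) start running
var div = document.createElement("div");
div.id = "line_graph";
document.body.appendChild(div);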

You can ensure that your dependencies get installed correctly by creating a file called package.json which lists the names and versions of all of your dependencies. Then, from the directory containing package.json, type npm install to install all of the packages you need. As an example, here's my package.json file:


{
  "name": "example",
  "version": "1.0.0",
  "description": "example",
  "dependencies": {
      "mocha" : "^3.1.0",
      "chai" : "^3.5.0",
      "karma" : "^1.3.0",
      "karma-chai" : "^0.1.0",
      "karma-firefox-launcher" : "^1.0.0",
      "karma-mocha" : "^1.2.0",
      "karma-spec-reporter" : "^0.0.26"
  },
  "author": "Devosoft",
  "license": "MIT"
}

Once everything is set up, you can run karma with karma start karma.conf.js (note that you may need to specify a path to the karma executable or the configuration file). To run it on Travis, you can use the same .travis.yml configuration shown for Testee, just calling karma instead of testee in the script section.

And there you have it! Automated in-browser testing of Emscripten-compiled Javascript graphics code.

Emily Dolson

I’m a doctoral student in the Ofria Lab at Michigan State University, the BEACON Center for Evolution in Action, and the departments of Computer Science and Ecology, Evolutionary Biology, & Behavior. My interests include studying eco-evolutionary dynamics via digital evolution and using evolutionary computation techniques to interpret time series data. I also have a cross-cutting interest in diversity in both biological and computational systems. In my spare time, I enjoy playing board games and the tin whistle.
