Last update 04/05/2021

Automated testing

Current testing of SuperflexPy consists of validating its numerical results against the original implementation of Superflex. This testing is done for selected model configurations and selected sets of parameters and inputs.

This testing strategy implicitly checks the auxiliary methods, including setting parameters and states, retrieving the internal fluxes of the model, and setting inputs and getting outputs.

The testing code is contained in the folder test and uses the Python module unittest. This folder contains the subfolders reference_results and unittest; the latter contains the scripts that run the tests.
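A test in this style can be sketched as follows; the test class, the simulated series, and the tolerance are illustrative stand-ins (not the actual SuperflexPy test code), chosen so the sketch is self-contained and runnable.

```python
import unittest

import numpy as np


class TestAgainstReference(unittest.TestCase):
    """Sketch of a regression test: run a model configuration and
    compare the simulated series with results produced by the
    original Superflex implementation (here hard-coded)."""

    def test_output_matches_reference(self):
        # Stand-in for a SuperflexPy simulation: a deterministic
        # series, so that the sketch does not require the package.
        simulated = np.cumsum(np.full(10, 0.5))

        # In the real tests, the reference series would be loaded
        # from a file stored under test/reference_results.
        reference = np.arange(0.5, 5.5, 0.5)

        np.testing.assert_allclose(simulated, reference, rtol=1e-8)


if __name__ == "__main__":
    unittest.main()
```

numpy.testing.assert_allclose is preferred over exact equality because the two implementations may differ at floating-point precision even when numerically equivalent.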

Current testing covers:

  • Specific elements (reservoirs and lag functions) that are implemented in Superflex;
  • Multiple elements in a unit;
  • Multiple units in a node;
  • Multiple nodes inside a network;
  • Auxiliary methods, which are tested implicitly, i.e., assuming that errors in the auxiliary methods propagate to the results.
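The implicit-testing assumption in the last point can be illustrated with a toy element (hypothetical code, not part of SuperflexPy): if a setter such as set_parameters were broken, a run with non-default parameters would reproduce the default output and therefore fail its numerical comparison with the reference.

```python
import numpy as np


class ToyLinearReservoir:
    """Toy element used only for illustration: a linear reservoir
    whose parameters are set through a dedicated auxiliary method."""

    def __init__(self, k, s0):
        self._parameters = {"k": k}
        self._state = s0

    def set_parameters(self, parameters):
        # A bug here (e.g., silently ignoring the update) would not
        # raise an error on its own, but would change the output below.
        self._parameters.update(parameters)

    def simulate(self, precipitation):
        k = self._parameters["k"]
        s = self._state
        outflow = []
        for p in precipitation:  # simple forward step, for illustration
            s = s + p - k * s
            outflow.append(k * s)
        return np.array(outflow)


rain = np.array([1.0, 0.0, 0.0, 2.0])
element = ToyLinearReservoir(k=0.1, s0=0.0)
baseline = element.simulate(rain)

# After changing a parameter, the output must differ from the baseline;
# if it did not, the numerical comparison against reference results for
# the non-default parameter set would expose the faulty setter.
element.set_parameters({"k": 0.5})
changed = element.simulate(rain)
assert not np.allclose(baseline, changed)
```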

Current testing does not cover:

  • Elements for which numerical results are not available (e.g., some components of GR4J);
  • Usage of the Explicit Euler solver;
  • Edge cases (e.g., extreme values of parameters and states).

Users contributing SuperflexPy extensions should provide reference results and the code that tests them (including input data and model parameter values).
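One possible shape for such a contribution is sketched below: a file bundling the inputs, the parameter values, and the expected output, plus a test that reloads the bundle and compares it with a fresh run. All file names, keys, and the stand-in recursion are hypothetical, not a prescribed format.

```python
import json
import os
import tempfile

import numpy as np

# Hypothetical reference bundle for a contributed element: storing the
# inputs and parameters together with the expected output makes the
# test fully reproducible.
reference = {
    "parameters": {"k": 0.1},
    "inputs": [1.0, 0.0, 2.0],
    "expected_output": [0.1, 0.09, 0.281],
}

path = os.path.join(tempfile.mkdtemp(), "new_element_reference.json")
with open(path, "w") as f:
    json.dump(reference, f)

# The accompanying test reloads the bundle, re-runs the element with the
# stored inputs and parameters, and compares. Here the "simulation" is a
# stand-in linear-reservoir recursion that matches the stored values.
with open(path) as f:
    bundle = json.load(f)

k = bundle["parameters"]["k"]
state, simulated = 0.0, []
for p in bundle["inputs"]:
    state = state + p - k * state
    simulated.append(k * state)

np.testing.assert_allclose(simulated, bundle["expected_output"], rtol=1e-6)
```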

As the SuperflexPy framework continues to develop, additional facilities for unit and integration testing will be introduced.


Any push of new code to any branch of the GitHub repository triggers automated testing based on the scripts contained in the folder test/unittest.