Coupling simulation programs at runtime is an effective way to extend specialized simulation programs or submodels with external functionality. The alternative – integrating every simulation capability and all conceivable models into one gigantic monolithic program – would be futile, and has never worked well in the past. That is why numerous interfaces and middleware programs for simulation coupling exist.

Many of these interfaces are specific to the data exchange between two particular tools, or they are proprietary and thus subject to unilateral, arbitrary changes (see, e.g., the S-Function interface of MATLAB). For this reason, a new vendor-independent interface standard, the Functional Mock-up Interface (FMI), was created between 2008 and 2011 in the multi-year MODELISAR research and development project. Since then, this standard has been systematically developed further and extended with important functionality.

FMI-Cross-Check Tests

The FMI standard defines a large number of function interfaces which the simulation master can call. In addition, the content and structure of the model description file modelDescription.xml are precisely defined. Both the simulation module (the FMU) and the simulation master must handle the defined runtime interface and model description correctly, and the two sides are tested for correct functionality in different ways.
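The formal side of such a check – verifying that the model description file is well-formed and carries the mandatory attributes – can be sketched in a few lines of Python. The root element name `fmiModelDescription` and the attributes `fmiVersion`, `modelName`, and `guid` come from the FMI 1.0/2.0 model description schema; the function names and error reporting are purely illustrative:

```python
import zipfile
import xml.etree.ElementTree as ET

# Attributes every FMI 1.0/2.0 model description must carry on its root element.
REQUIRED_ATTRIBUTES = ("fmiVersion", "modelName", "guid")

def check_model_description(xml_text):
    """Parse modelDescription.xml content and list missing required attributes."""
    root = ET.fromstring(xml_text)
    if root.tag != "fmiModelDescription":
        return ["unexpected root element: " + root.tag]
    return [a for a in REQUIRED_ATTRIBUTES if a not in root.attrib]

def check_fmu(fmu_path):
    """An FMU is a zip archive; modelDescription.xml must sit at its root."""
    with zipfile.ZipFile(fmu_path) as fmu:
        xml_text = fmu.read("modelDescription.xml")
    return check_model_description(xml_text)
```

A real compliance checker goes much further (variable definitions, capability flags, binary layout), but the principle is the same: unzip, parse, compare against the schema.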

Because the very diverse functionality of FMI calculation models makes classical model-to-model comparisons difficult, extensive comparative testing is performed in a so-called cross-check. In this process, different FMUs (exported from different simulation programs) are combined with different FMI master programs. The calculation results obtained this way potentially contain errors from both the FMU and the master program. If, however, many results from many different FMU-master combinations match sufficiently well, the FMU export and import tools involved can be assumed to work correctly – at least as far as the tested range of functionality is concerned.
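The notion of results that "match sufficiently well" can be made concrete with a combined absolute/relative tolerance test. The sketch below is a minimal illustration of this idea; the tolerance values and function names are assumptions, not the official cross-check acceptance rules:

```python
def series_agree(ref, other, rel_tol=1e-3, abs_tol=1e-6):
    """Compare two result series point by point against a mixed tolerance."""
    if len(ref) != len(other):
        return False
    return all(abs(r - o) <= abs_tol + rel_tol * abs(r)
               for r, o in zip(ref, other))

def count_agreeing(all_results, rel_tol=1e-3, abs_tol=1e-6):
    """Count how many FMU-master combinations agree with the first result set."""
    ref = all_results[0]
    return sum(series_agree(ref, res, rel_tol, abs_tol) for res in all_results)
```

If most combinations agree and one outlier deviates, the error is most likely in the outlier's export or import tool rather than in all the others at once.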

The results of these comparison tests are collected online in a cross-check repository.

Individual FMU Tests

To check whether an FMI simulation model (FMU) meets the standardized requirements, there is a defined test procedure:

  • Single test of an FMU with a test script that checks formal criteria and tests the model description.
  • Single test of an FMU with controlled execution of a test simulation using predefined external input variables.

The following test scripts/test programs for individual FMUs are available and perform these tests:

  • FMU Compliance Checker – A program (available for Windows, Linux, macOS) that checks an FMU for correctly formulated interface description and correct function implementation.
  • Online FMU Checker – Allows uploading an FMU and analyzing it for standard compliance.


The individual FMU tests only exercise the simplest execution modes. For co-simulation FMUs, for example, only the simple FMI 1.0 forward step mode with constant step sizes is tested. Important optional functions – e.g. the ability to save and restore the runtime state (an FMI 2.x capability), or the correct handling of input and output variable interpolation (also FMI 2.x) – are not tested. Yet these are sometimes critically necessary for models of practical complexity in order to run simulations stably and efficiently.
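The "simple forward step mode with constant step size" that the checkers do cover corresponds roughly to the following master loop. Since the actual FMI functions (`fmi2DoStep` etc.) are called through a binary C interface, the `SlaveStub` class below merely stands in for a real co-simulation FMU, here integrating the decay equation dy/dt = -y with explicit Euler:

```python
class SlaveStub:
    """Stand-in for a co-simulation FMU: integrates dy/dt = -y with explicit Euler."""
    def __init__(self, y0=1.0):
        self.y = y0

    def do_step(self, t, h):
        # A real FMU would advance its internal solver from t to t + h here.
        self.y += h * (-self.y)
        return True  # corresponds to the fmiOK status code

def run_forward(slave, t_end, h):
    """Constant-step forward master loop: no rollback, no state save/restore."""
    t = 0.0
    results = [(t, slave.y)]
    while t < t_end - 1e-12:
        assert slave.do_step(t, h), "slave rejected the step"
        t += h
        results.append((t, slave.y))
    return results
```

Note what is missing: there is no way to retry a rejected step with a smaller step size, because that would require saving and restoring the FMU state – exactly the optional FMI 2.x functionality the current test scripts leave untested.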

Extending the existing test programs and scripts in this respect remains a task for the coming years.

Simulation Master Tests

In addition to these individual tests, it is checked whether an FMU works correctly with other simulation master programs (hence the term cross-check). Here, a simple simulation scenario is run in which the FMU is fed predefined time series as input variables. The outputs generated by the FMU are then compared with reference results.
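Since the FMU output and the reference results are generally sampled on different time grids, such a comparison needs an interpolation step. A minimal sketch, assuming piecewise-linear series and an illustrative deviation measure (not the official cross-check criterion), could look like this:

```python
import bisect

def interp(t, times, values):
    """Linearly interpolate a piecewise-linear series at time t."""
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    i = bisect.bisect_right(times, t)
    w = (t - times[i - 1]) / (times[i] - times[i - 1])
    return (1.0 - w) * values[i - 1] + w * values[i]

def max_deviation(ref_t, ref_y, out_t, out_y):
    """Largest absolute deviation of the FMU output from the reference series."""
    return max(abs(ry - interp(t, out_t, out_y))
               for t, ry in zip(ref_t, ref_y))
```

The scenario passes if the maximum deviation stays below an agreed tolerance over the whole simulated interval.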

An example of such a scenario validation is described in the following publication:

Nicolai, A.: Co-Simulation Master Algorithms – Analysis and Implementation Details Using MASTERSIM as an Example, Qucosa, 2018.

However, there are no systematic test criteria and test series for simulation master programs yet. The functionality available in FMI master programs is currently still very diverse and implemented to very different degrees of quality and completeness. Establishing such criteria, and thereby assuring the accuracy of simulation results, remains an open task for the coming years.