Quantitative Verification Benchmark Set
Information for Contributors
We welcome contributions that extend the Quantitative Verification Benchmark Set by adding new models, updating existing models, and adding new experimental results.
Before you submit, please carefully read and follow the general instructions as well as those specific for your type of contribution below.
If you have questions about contributing, contact .
How to submit
We accept contributions via email as .zip files that contain only the files being added or changed, as well as via GitHub pull requests.
Submissions via email will be committed to our Git repository in your name; please mention your full name and the email address of your GitHub account in your email.
For pull requests, we require all commits to be in your full name.
Every submission should only contain a single contribution, e.g. one new model, or an update to one existing model.
Licensing
All data contributed to the benchmark set will be redistributed under the terms of the open-access CC-BY 4.0 license.
Before submitting, make sure that you have the right to make your contribution publicly available under this license and include a statement to this effect in your email.
If you submit via pull request, send an email (from the email address associated with your GitHub account) with such a statement to before creating the pull request.
Adding New Models
We welcome contributions of new models that extend the benchmark set.
Every model must be
-
formal, i.e. have a well-defined semantics in terms of discrete- or continuous-time Markov chains (DTMC or CTMC), Markov decision processes (MDP), Markov automata (MA) or probabilistic timed automata (PTA), and contain at least one probabilistic reachability or expected-reward property to be checked;
-
available in the JANI model format in addition to its original modelling formalism (see our list of existing JANI converters);
-
tested to work with one tool given the JANI translation and at least one parameter configuration, where the test is documented as an experimental result in the benchmark set; and be
-
maintained, i.e. the submitter must be willing and available to answer questions about the model and review update submissions for a reasonable number of years after the submission.
We aim for a large and diverse collection of interesting benchmark examples.
For this reason, before you decide to add a model, please consider why that model is interesting, and how it helps diversify the collection.
For example:
-
Is the model part of an established benchmark set?
-
Has it been developed to solve a challenging industrial problem?
-
Is the type of model – its modelling formalism or the kind of system it represents – underrepresented in the benchmark set?
-
Is it challenging to analyse due to its size, other structural aspects, or the properties to check?
Very briefly state this reasoning in the notes or challenge attribute of the model metadata.
Once you have established that you have the right to release the model under the terms of the CC-BY 4.0 license (see above), that it satisfies the above requirements, and that it is interesting, please do the following steps to contribute it to the benchmark set:
-
Clone the Git repository.
-
Choose a short name for your model.
Follow the short name patterns established by the existing models in the benchmark set.
-
In the folder corresponding to the mathematical semantics of your model (e.g. benchmarks/ma for Markov automata, etc.), create a new subfolder with the exact short name for your model.
-
Place the original model file and its JANI translation into that subfolder.
Use the "short" name of the model for the file names of all model files (original and JANI) wherever possible.
If a file contains hardcoded parameters, use the short name of the model, followed by a dot (.), followed by the hardcoded parameter values separated by dashes (-) in the order in which they appear on the benchmark set's website, followed by the file extension.
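For example, the JANI file of a hypothetical model with short name foo and hardcoded parameter values 16 and 2 would be named foo.16-2.jani.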
-
In the same subfolder, create a file index.json containing the metadata for your model.
Check the existing models for how to create the metadata file and what information to include.
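A rough, illustrative sketch of such a metadata file is also given at the end of this list.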
-
Use tabs for indentation.
-
Use title capitalisation for the model's name.
-
Whenever possible, use a citable document for the source (i.e. link to a DOI in a format recognised by the benchmark set's website).
This link will be labelled "first presented in", and users of the benchmark set are asked to cite this document when using the model.
-
All references must be referred to in the description (use [1] to refer to the first reference, [2] for the second, etc.; use [0] to refer to the source).
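For example, a description might contain a sentence such as "the protocol was first described in [0] and later generalised in [1]".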
-
All parameters of a model should be explained in the description.
In text, enclose parameter names in `´ (e.g. `N´ to refer to parameter N).
We distinguish between hard-coded file parameters and open parameters for which values can be specified at analysis time.
Provide some representative sets of values for the parameters.
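For example, the description of a hypothetical model could explain that `N´ is the number of participating nodes, with representative values such as 2, 4, and 8 given in the metadata.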
-
All properties that are contained in the JANI file(s) must be listed with their corresponding name.
The type of each property must be either prob-reach (probabilistic reachability), exp-steps (expected number of steps), exp-time (expected time), exp-reward (expected accumulated reward), steady-state-prob (steady-state probability) or steady-state-reward (steady-state reward).
All but the steady-state property type identifiers can optionally be followed by -step-bounded, -time-bounded, or -reward-bounded to mark a bounded property.
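For example, a time-bounded probabilistic reachability property has type prob-reach-time-bounded.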
-
Add a subfolder named results and include at least one result for one parameter configuration obtained from one tool given the JANI translation as input.
-
Add a reference to your model in file benchmarks/index.json.
(Keep the alphabetical ordering in that file.)
-
Test your additions by opening benchmarks/index.html in a browser.
In particular, check that all links work, and that all data in the models table is displayed correctly.
-
Submit via email or GitHub pull request.
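To illustrate the metadata step above, the following is a rough, non-authoritative sketch of what an index.json file could contain. The attribute names and structure shown are assumptions pieced together from the instructions in this list, so always copy the exact layout from an existing model's index.json (and remember to use tabs for indentation in the actual file):
{
	"name": "Hypothetical Example Model",
	"short": "example",
	"version": 1,
	"version-history": [
		{ "version": 1, "description": "Initial version." }
	],
	"source": "https://doi.org/...",
	"references": [ "https://doi.org/..." ],
	"description": "A hypothetical model of a simple protocol, first presented in [0] and extended in [1]. `N´ is the number of nodes.",
	"challenge": "State space growth for large values of `N´.",
	"parameters": [
		{ "name": "N", "description": "number of nodes" }
	],
	"properties": [
		{ "name": "p1", "type": "prob-reach" }
	]
}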
Updating Existing Models
Existing models within the benchmark set may be updated to correct modelling errors, add parameters, or add properties.
An update should make the fewest changes necessary to achieve its goal.
These are the recommended steps to submit an update of an existing model:
-
Clone the Git repository.
-
Make changes to the files inside the model's subfolder.
-
Increment the model's version number and add an entry to its version-history.
-
Test your changes by opening benchmarks/index.html in a browser.
In particular, check that all links still work, and that all data in the models table is still displayed correctly.
-
Submit via email or GitHub pull request.
Adding New Results
In addition to models, we also collect experimental results obtained by analysing the models with various tools.
The purpose of making these results available is to provide reference values for the properties included in the models and to showcase and compare the performance of different tools.
In general, the tools used should be publicly available so that all results can be independently replicated.
Follow these steps to submit new results:
-
Clone the Git repository.
-
Add the .json files and tool output logs describing your results to the results subfolder of the corresponding model.
The names of the files should be the name of the tool followed by a dot (.),
followed by the tool variant used and a dot (if any),
followed by all parameter values separated by dashes (-) in the order in which they appear on the benchmark set's website and a dot,
followed by the date (in YYYY-MM-DD format) of the results and a dot,
followed by the file extension.
For example: Storm.exact.3-4-3.2018-10-03.json.
Check existing results for how to create the .json file and what information to include.
-
Use tabs for indentation.
-
Include the version number of the model that the experiments were performed with.
-
Strictly follow the established patterns of describing the system (CPU, operating system, etc.) used.
For example, state the exact CPU model, but do not include its frequency (as long as it can be deduced from the model name) or any "(tm)" or "(R)" symbols in the cpu field.
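For instance, a suitable cpu value might be "Intel Core i7-4790" rather than "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz" (this specific CPU is just an illustration).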
-
Specify the command as if the current working directory was the results subfolder and both your tool as well as the model files were included in that subfolder.
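For example, a hypothetical invocation could look like mytool example.16-2.jani --props p1; the tool name and options here are purely illustrative, so use your tool's actual command line.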
-
Report timing results in seconds with millisecond precision (i.e. three decimal digits).
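For example, a runtime of 12 seconds and 345 milliseconds is reported as 12.345.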
-
If your experiments provide new information about the number of states for some parameter configurations, update the model's main index.json file accordingly.
-
Add or update the technical and performance data for the CPU used for your experiments in benchmarks/cpu-data.json.
-
Test your changes by opening benchmarks/index.html in a browser.
In particular, check that all links work.
-
Submit via email or GitHub pull request.