
A Mocking Framework for Runtime Verification Tools

Runtime Verification
Robustness and reliability are key factors in the successful deployment of security-critical systems. Testing fails to address these concerns adequately, since not all possible execution paths are covered. On the other hand, verification, the analysis of all execution paths, is usually infeasible due to the large state space of such systems. Runtime verification addresses this issue by enabling formal verification of the system during its execution.

SMock
Two of the major challenges in runtime verification, both crucial for its adoption in industry, are managing the overheads induced by monitoring and ensuring the correctness of the reported results. State-of-the-art runtime verification tools such as Java-MOP and tracematches have been tested on the DaCapo benchmark. However, the kinds of properties monitored there are rather low-level, contrasting with our experience with industrial partners, who are more interested in checking business-logic properties. Whilst we have had the chance to test our tool Larva on industrial case studies, such case studies are usually available only for short periods of time and in limited ways due to confidentiality concerns. Relying solely on such case studies can be detrimental to the development of new tools, which need substantial testing and analysis before being of any use.

To address this gap, we have built a configurable framework which may be used to mock transaction systems (we use "transaction" to refer to an independent, highly replicated unit of processing) under different loads and usage patterns. The key feature of the framework is the ease with which one can define different types of transactions and farm out concurrent instances of such transactions according to user-specified distributions. This, in turn, enables easier benchmarking of the overheads induced by different runtime verification tools and techniques, and testing of their correctness, under different environment conditions. Although not a replacement for industrial case studies, this enables a better scientific evaluation of runtime verification systems.
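As a rough illustration of this idea, consider the following minimal Java sketch. It is hypothetical: the class and method names are our own invention for illustration, not SMock's actual API. It farms out concurrent mocked transactions with exponentially distributed inter-arrival times, standing in for one possible user-specified load distribution.

    import java.util.Random;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical sketch only: these names are illustrative and are not
    // SMock's actual API.
    public class MockedTransactionDemo {

        // A "transaction" in the sense used above: an independent, highly
        // replicated unit of processing whose events a monitor can observe.
        static void runTransaction(int id) {
            System.out.println("transaction " + id + ": started");
            System.out.println("transaction " + id + ": committed");
        }

        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(8);
            Random rng = new Random(42); // fixed seed keeps the load pattern repeatable

            // Farm out 100 concurrent transaction instances with exponentially
            // distributed inter-arrival times (mean 50 ms), i.e. a Poisson-like
            // arrival pattern standing in for a user-specified distribution.
            for (int i = 0; i < 100; i++) {
                final int id = i;
                pool.submit(() -> runTransaction(id));
                long delayMs = (long) (-50.0 * Math.log(1.0 - rng.nextDouble()));
                Thread.sleep(delayMs);
            }
            pool.shutdown();
        }
    }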

SMock allows straightforward scripting of case studies, giving control over transaction behaviour, timing issues, load build-up, usage patterns, etc., which can be used to benchmark the correctness and performance of different runtime verification tools. A major issue SMock attempts to address is the repeatability of experiments, ensuring the scientific validity of the conclusions drawn. To evaluate the tool, we have used it to build a suite of benchmarks and compared the performance of Java-MOP and polyLarva under different mocked scenarios.
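To make the repeatability point concrete, here is another hypothetical sketch (again, not SMock's actual scripting interface): fixing the seed of the pseudo-random number generator makes the generated schedule identical across runs, so two monitoring tools can be exercised against exactly the same mocked workload.

    import java.util.Random;

    // Hypothetical sketch, not SMock's interface: a fixed seed makes the
    // generated transaction schedule identical on every run, so different
    // monitoring tools can be compared on exactly the same workload.
    public class RepeatableSchedule {
        public static void main(String[] args) {
            Random rng = new Random(2013); // same seed => same schedule
            for (int i = 0; i < 10; i++) {
                // First ten inter-arrival delays (ms); rerunning the program
                // reproduces them exactly.
                long delayMs = (long) (-50.0 * Math.log(1.0 - rng.nextDouble()));
                System.out.println("delay " + i + ": " + delayMs + " ms");
            }
        }
    }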

SMock Papers
[1] SMock - A Test Platform for Monitoring Tools, Christian Colombo, Ruth Mizzi, and Gordon Pace. In Runtime Verification (RV), Rennes, France, 2013. (pdf)
The Toolkit



Downloads:
If you would like a copy of the tool, please fill in the following form, and we will send you the tool directly via email. Should you have any problems with the form, or fail to receive a response in a reasonable amount of time, please let us know via email.


Name:
Institution:
Address:
Email:
Intended use and other comments:




The following further material supporting the tool may be downloaded directly:


Contact details
Christian Colombo
Department of Computer Science,
Faculty of ICT,
University of Malta,
Msida MSD2080,
MALTA

Tel: (+356) 2340 2640
E-mail: christian.colombo@um.edu.mt
Ruth Mizzi
E-mail: ruth.mizzi@gmail.com
Gordon Pace
Department of Computer Science,
Faculty of ICT,
University of Malta,
Msida MSD2080,
MALTA

Tel: (+356) 2340 2504
Fax: (+356) 2132 0539
E-mail: gordon.pace@um.edu.mt