Robustness and reliability are key factors in the successful deployment of
security-critical systems. Testing fails to address these concerns adequately,
since not all possible execution paths are covered. On the other hand, verification
- the analysis of all execution paths - is usually infeasible due to the large
state space of such systems. Runtime verification addresses this issue by enabling
formal verification of the system during its execution.
Two of the major challenges in runtime verification, both crucial for its adoption in industry, are the management of the overheads induced by monitoring and the assurance of the correctness of the reported results.
State-of-the-art runtime verification tools such as Java-MOP and tracematches have been tested on the DaCapo benchmark. However, the kinds of properties monitored are rather low-level, in contrast with our experience with industrial partners, who are more interested in checking business-logic properties. Whilst we have had the chance to test our tool Larva on industrial case studies, such case studies are usually available only for short periods of time and in limited ways due to confidentiality concerns. Relying solely on such case studies can be detrimental to the development of new tools, which need substantial testing and analysis before being of any use.
To address this gap, we have built a configurable framework which may be used to mock transaction systems (we use "transaction" to refer to an independent, highly replicated unit of processing) under different loads and usage patterns. The key feature of the framework is the ease with which one can define different types of transactions and farm out concurrent instances of such transactions through user-specified distributions. Profiling the overheads induced by different runtime verification tools thus enables easier benchmarking and correctness testing of different tools and techniques under different environment conditions. Although not a replacement for industrial case studies, this enables better scientific evaluation of runtime verification systems.
SMock allows straightforward scripting of case studies, giving control over transaction behaviour, timing issues, load buildup, usage patterns, etc., which can be used to benchmark the correctness and performance of different runtime verification tools. A major issue SMock attempts to address is that of the repeatability of experiments, ensuring the scientific validity of the conclusions drawn. To evaluate the tool, we have used it to build a suite of benchmarks and compared the performance of Java-MOP and polyLarva under different mocked scenarios.
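To give a flavour of the kind of workload such a framework generates, the following is a minimal, self-contained Java sketch of a mocked transaction driver: concurrent transaction instances are farmed out with exponentially distributed inter-arrival times, using a fixed random seed for repeatability. The class and method names here are illustrative assumptions, not SMock's actual API.

```java
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a mocked transaction driver (hypothetical; SMock's real
// interface differs). Each "transaction" is an independent unit of work;
// instances are spawned concurrently with exponentially distributed
// inter-arrival times to mimic a configurable load pattern.
public class MockedLoadDriver {
    static final AtomicInteger completed = new AtomicInteger();

    // A trivial mocked transaction: in a real benchmark this body would
    // exercise the monitored business logic (e.g. a bank transfer).
    static void transaction(int id) {
        completed.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        int n = 50;                       // number of transaction instances
        double meanInterArrivalMs = 2.0;  // load parameter
        Random rng = new Random(42);      // fixed seed for repeatability
        ExecutorService pool = Executors.newFixedThreadPool(8);

        for (int i = 0; i < n; i++) {
            final int id = i;
            pool.submit(() -> transaction(id));
            // Exponential inter-arrival delay: -mean * ln(U), U in (0, 1]
            long delay = (long) (-meanInterArrivalMs
                                 * Math.log(1.0 - rng.nextDouble()));
            Thread.sleep(delay);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("completed=" + completed.get());
    }
}
```

Varying the distribution, the thread-pool size, or the transaction body gives the different environment conditions under which a runtime verification tool's overheads can then be profiled.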