Assessing Model Quality
(based on [FR04b])
Students in software engineering courses that cover modeling often ask some variant of the following question: “How do I know that my model is a good model?” It is not easy to provide a satisfactory response to this question. Good instructors provide students with some criteria and guidelines in the form of patterns (e.g., Craig Larman’s GRASP patterns), rules of thumb (e.g., “minimize coupling, maximize cohesion”, “keep inheritance depth shallow”), and exemplar models to better understand good modeling practices. While this helps, the reality is that students ultimately rely on feedback from their instructors to determine the quality of their models. The instructors play the role of expert modelers and the students are their apprentices. The state of the practice in assessing model quality, in the classroom and in industry, seems to indicate that modeling is still in the craftsmanship phase.
Research on rigorous assessment of model quality has given us a glimpse of how we can progress to the next phase, in which models are engineered. A number of researchers are working on developing rigorous static analysis techniques that are based on well-defined models of behavior. Articles on model checking of modeled behavior published in SoSyM are a good reflection of the work in this area. Another promising area of research is systematic model testing (i.e., systematic dynamic analysis of modeled behavior). Systematic dynamic analysis of code (i.e., code testing) involves executing programs on a selected set of test inputs that satisfy some test criteria. These ideas can be extended to the modeling phases when models with operational semantics are used. Most educators in the modeling community have heard students gripe about their inability to animate or execute the models they have created in order to explore the behavior they have modeled. Model testing is concerned with providing modelers with this ability. Systematic model testing techniques provide opportunities for automating the testing process and for reusing tests. Systematic regression testing techniques in particular can enable more rigorous model evolution. The notion of model testing is not new. For example, SDL (Specification and Description Language) tools provide facilities for exercising state-machine-based SDL models using an input set of test events. Work on executable variants of the UML also aims to provide modelers with feedback on the adequacy of their models. More recently, a small but growing number of researchers have begun looking at developing systematic model testing techniques. This is an important area of research and helps pave the way towards more effective use of models during software development. There are a number of lessons from the systematic code testing community that can be applied, but the peculiarities of modeling languages also require the development of new and innovative approaches. In particular, innovative work is needed on defining effective test criteria based on coverage of model elements and on generating model-level test cases that provide the desired levels of coverage.
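To make the idea of coverage-based model testing more tangible, the following sketch exercises a small state-machine model with model-level test cases and measures transition coverage. It is purely illustrative: the model encoding and the names (StateMachine, run, transition_coverage) are assumptions for this example, not the API of any particular modeling tool or executable UML dialect.

```python
# Illustrative sketch: coverage-based testing of a simple state-machine model.
# All names and the model encoding are hypothetical, chosen only for brevity.

class StateMachine:
    def __init__(self, initial, transitions):
        # transitions: dict mapping (state, event) -> next state
        self.initial = initial
        self.transitions = transitions

    def run(self, events):
        """Execute one test case (a sequence of events) and return the
        set of transitions it exercises."""
        state, covered = self.initial, set()
        for event in events:
            key = (state, event)
            if key not in self.transitions:   # event not enabled in this state
                break
            covered.add(key)
            state = self.transitions[key]
        return covered


def transition_coverage(machine, test_suite):
    """Fraction of modeled transitions exercised by the whole test suite."""
    covered = set()
    for events in test_suite:
        covered |= machine.run(events)
    return len(covered) / len(machine.transitions)


# A toy door model and two model-level test cases.
door = StateMachine(
    initial="Closed",
    transitions={
        ("Closed", "open"): "Open",
        ("Open", "close"): "Closed",
        ("Closed", "lock"): "Locked",
        ("Locked", "unlock"): "Closed",
    },
)
suite = [["open", "close"], ["lock", "unlock"]]
print(f"transition coverage: {transition_coverage(door, suite):.0%}")  # 100%
```

A coverage criterion of this kind (here: every transition exercised at least once) plays the same role for models that statement or branch coverage plays for code, and it gives regression testing of an evolving model a concrete, checkable target.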
It is also useful to look at how other engineering disciplines determine the quality of their models. Engineers in other disciplines typically explore answers to the following questions when determining the adequacy of their models: Is the model a good predictor of how the physical artifact will behave? What are the (simplifying) assumptions underlying the model, and what impact will they have on actual behavior? The answer to the first question is often based on evidence gathered from past applications of the model. Evidence of model fidelity is built up by comparing the actual behavior of systems built using the models with the behavior predicted by the models. Each time engineers build a system, the experience gained either reinforces their confidence in the predictive power of the models used or is used to improve that predictive power. Answers to the second question allow engineers to identify the limitations of analyses carried out using the models and to develop plans for identifying and addressing problems that arise when the assumptions are violated. Are similar questions applicable to software models? There are important differences between physical and software artifacts that one needs to consider when applying practices from other engineering disciplines to software, but some of that experience can probably be beneficially applied to software modeling as well.
We can be sure that static analysis through context condition checking in various forms and dynamic checking through different kinds of testing strategies will be important parts of the newly emerging model engineering discipline.
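As a small illustration of what context condition checking can look like, the sketch below statically checks a toy class model against two well-formedness rules (resolvable superclass references, unique attribute names). The model encoding and the rules are assumptions chosen for brevity; they are not the context conditions of any specific modeling language.

```python
# Illustrative sketch: static checking of context conditions (well-formedness
# rules) on a toy class model. Encoding and rules are hypothetical examples.

def check_context_conditions(model):
    """Return violated context conditions for a class model given as
    {class_name: {"super": name_or_None, "attributes": [names]}}."""
    errors = []
    for name, cls in model.items():
        # Rule 1: a referenced superclass must be defined in the model.
        sup = cls.get("super")
        if sup is not None and sup not in model:
            errors.append(f"{name}: unknown superclass '{sup}'")
        # Rule 2: attribute names within a class must be unique.
        attrs = cls.get("attributes", [])
        if len(attrs) != len(set(attrs)):
            errors.append(f"{name}: duplicate attribute names")
    return errors


model = {
    "Person":  {"super": None,     "attributes": ["name", "name"]},
    "Student": {"super": "Person", "attributes": ["id"]},
    "Course":  {"super": "Module", "attributes": ["title"]},  # 'Module' undefined
}
for err in check_context_conditions(model):
    print("violation:", err)
```

Static checks of this kind complement model testing: the context conditions rule out ill-formed models before any behavior is exercised, while testing probes the modeled behavior itself.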
This essay is essentially taken from a SoSyM editorial, which was published under a Creative Commons licence, and contains some updates:
- [FR04b] In: Journal Software and Systems Modeling (SoSyM), Volume 3(3), pp. 179-180, Springer Berlin / Heidelberg, 2004.