
Empirical Principles and an Industrial Case Study in Retrieving Equivalent Requirements via Natural Language Processing Techniques by D. Falessi, G. Cantone and G. Canfora

Published 10 Mar 2012, 01:03 by Gerardo Canfora   [ updated 5 Jan 2015, 13:52 ]
Though very important in software engineering, linking artifacts of the same type (clone detection) or of different types (traceability recovery) is extremely tedious, error-prone, and effort-intensive. Past research has focused on supporting analysts with techniques based on Natural Language Processing (NLP) to identify candidate links. Because many NLP techniques exist and their performance varies with context, it is crucial to define and use reliable evaluation procedures. The aim of this paper is to propose a set of seven principles for evaluating the performance of NLP techniques in identifying equivalent requirements.

In this paper we conjecture, and verify, that the performance of NLP techniques on a given dataset depends both on their intrinsic ability and on the odds of identifying equivalent requirements correctly by chance. For instance, when the odds of identifying equivalent requirements are very high, it is reasonable to expect that NLP techniques will achieve good performance. Our key idea is to measure this random factor of the specific dataset(s) in use and then adjust the observed performance accordingly. To support the application of the principles, we report their practical application in a case study that evaluates the performance of a large number of NLP techniques for identifying equivalent requirements in the context of an Italian company in the defense and aerospace domain.

The current application context is the evaluation of NLP techniques to identify equivalent requirements. However, most of the proposed principles appear applicable to the evaluation of any estimation technique aimed at supporting a binary decision (e.g., equivalent/non-equivalent), with the estimate in the range [0,1] (e.g., the similarity provided by the NLP technique), when the dataset(s) is used as a benchmark (i.e., test bed), independently of the type of estimator (i.e., requirements text) and of the estimation method (e.g., NLP).
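The key idea above can be illustrated with a minimal sketch (not the paper's actual procedure): score requirement pairs with a similarity in [0,1], decide "equivalent" above a threshold, and then adjust the observed precision by the dataset's random-chance baseline, i.e., the precision a random classifier would achieve given the odds of a pair being truly equivalent. The similarity measure, threshold, and adjustment formula here are illustrative assumptions, not those evaluated in the paper.

```python
# Illustrative sketch, assuming a simple bag-of-words cosine similarity
# as the NLP technique and a chance-adjusted precision as the corrected
# performance measure. Not the paper's actual principles or techniques.
from collections import Counter
from math import sqrt

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two requirements, in [0, 1]."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def chance_adjusted_precision(decisions, truth):
    """Adjust observed precision by the dataset's random baseline.

    `decisions` and `truth` are parallel lists of booleans. The baseline is
    the fraction of truly equivalent pairs in the dataset: the precision a
    random classifier would achieve, i.e., the "odds" of a random hit.
    """
    positives = [t for d, t in zip(decisions, truth) if d]
    if not positives:
        return 0.0
    observed = sum(positives) / len(positives)
    baseline = sum(truth) / len(truth)   # odds of a random guess being correct
    if baseline == 1.0:
        return 0.0                       # no room to perform above chance
    return (observed - baseline) / (1.0 - baseline)

# Hypothetical toy dataset: three requirements, one equivalent pair (0, 1).
reqs = [
    "the system shall log every user login",
    "every user login must be recorded by the system",
    "the system shall export reports as PDF",
]
pairs = [(0, 1), (0, 2), (1, 2)]
truth = [True, False, False]
decisions = [cosine(reqs[i], reqs[j]) >= 0.3 for i, j in pairs]
print(chance_adjusted_precision(decisions, truth))  # prints 0.25
```

On this toy dataset the technique flags pairs (0, 1) and (0, 2), so its raw precision is 0.5; since a random classifier would already achieve 1/3 by chance, the chance-adjusted score drops to 0.25, separating genuine ability from the odds built into the dataset.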
IEEE Trans. Software Eng. 39(1): 18-44 (2013)