An Empirical Study of the Impact of Bad Designs on Defect Proneness
- 1 November 2017
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE) in 2017 International Conference on Software Analysis, Testing and Evolution (SATE)
Abstract
To reduce losses from software defects, software engineering researchers have proposed many defect prediction techniques over the past decades, mainly focused on predicting defect-prone software modules, source code files, or code changes. Prior research has identified that software design has a significant impact on software quality; in particular, bad designs, e.g., anti-patterns, high-dependency designs, and large source code files, make various software engineering tasks more difficult. Given these prior works, various bad-design indicators have been widely adopted as fundamental metrics in defect prediction models. Even though the performance of these techniques has been investigated empirically, researchers have not yet gained a clear understanding of the correlation between these design metrics and defect proneness. To bridge this gap, in this paper we investigate the impact of three kinds of bad-design indicators on software defect proneness by conducting a comprehensive empirical study on 18 release versions of the Apache Commons series. In detail, we examine file-level defect proneness for three kinds of bad designs, corresponding to seven defect proneness metrics, including various types of well-defined code smells, high method dependency, and files of large size. Furthermore, we investigate the performance of each defect proneness metric and the overlap between the file sets involved in the bad designs. The experimental results indicate that the three types of bad designs do have an impact on defect proneness: files participating in certain code smells, files with a large number of calls to other modules, and files with a large number of lines of code are significantly more likely to be faulty. Moreover, the overlaps among the three types of bad designs are relatively small, which means that each group of defect proneness metrics is independent of the others.
This publication has 17 references indexed in Scilit:
- Are Fix-Inducing Changes a Moving Target? A Longitudinal Case Study of Just-In-Time Defect Prediction. IEEE Transactions on Software Engineering, 2017
- When and Why Your Code Starts to Smell Bad. Published by Institute of Electrical and Electronics Engineers (IEEE), 2015
- Some Code Smells Have a Significant but Small Effect on Faults. ACM Transactions on Software Engineering and Methodology, 2014
- How, and why, process metrics are better. Published by Institute of Electrical and Electronics Engineers (IEEE), 2013
- DECOR: A Method for the Specification and Detection of Code and Design Smells. IEEE Transactions on Software Engineering, 2009
- Software Dependencies, Work Dependencies, and Their Impact on Failures. IEEE Transactions on Software Engineering, 2009
- Predicting defects using network analysis on dependency graphs. Published by Association for Computing Machinery (ACM), 2008
- EQ-Mine: Predicting Short-Term Defects for Software Evolution. Published by Springer Science and Business Media LLC, 2007
- Empirical Analysis of Object-Oriented Design Metrics for Predicting High and Low Severity Faults. IEEE Transactions on Software Engineering, 2006
- Where the bugs are. ACM SIGSOFT Software Engineering Notes, 2004