Abstract:
Over the last few years, researchers have proposed a multitude of automated bug-detection approaches that mine a class of bugs that we call API misuses. Evaluations on a variety of software products show both the omnipresence of such misuses and the ability of the approaches to detect them. This work presents MUBench, a dataset of 89 API misuses collected from 33 real-world projects and a survey. Using the dataset, we empirically analyze the prevalence of API misuses compared to other types of bugs, finding that they are rare but almost always cause crashes. Furthermore, we discuss how to use the dataset to benchmark and compare API-misuse detectors.
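To make the notion of an API misuse concrete, the following sketch shows one classic kind of misuse: calling `Iterator.next()` without first checking `hasNext()`, which crashes with a `NoSuchElementException` on an empty collection. The class and method names here are hypothetical illustrations, not code from the dataset; they merely exemplify the "rare, but almost always cause crashes" finding from the abstract.

```java
import java.util.Iterator;
import java.util.List;

public class ApiMisuseExample {
    // Misuse: next() is called without a preceding hasNext() check.
    // On an empty list this throws java.util.NoSuchElementException.
    static String firstElementMisuse(List<String> items) {
        Iterator<String> it = items.iterator();
        return it.next(); // crashes when items is empty
    }

    // Correct usage: guard the next() call with hasNext().
    static String firstElementCorrect(List<String> items) {
        Iterator<String> it = items.iterator();
        return it.hasNext() ? it.next() : null;
    }

    public static void main(String[] args) {
        System.out.println(firstElementCorrect(List.of("a", "b"))); // prints "a"
        System.out.println(firstElementCorrect(List.of()));         // prints "null"
        try {
            firstElementMisuse(List.of());
        } catch (java.util.NoSuchElementException e) {
            System.out.println("misuse crashed: NoSuchElementException");
        }
    }
}
```

An API-misuse detector would flag `firstElementMisuse` because its usage of `java.util.Iterator` violates the API's expected call protocol (`hasNext()` before `next()`).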
Resources
- Download preprint
- Visit publisher page (via DOI)
- Visit artifact page
- See slides of the talk
- See slide images
BibTeX
@inproceedings{ANNNM16,
  title     = {{MUBench: A Benchmark for API-Misuse Detectors}},
  author    = {Amann, Sven and Nadi, Sarah and Nguyen, Hoan Anh and Nguyen, Tien N. and Mezini, Mira},
  booktitle = {{Proceedings of the 13th International Conference on Mining Software Repositories}},
  series    = {MSR 2016},
  year      = {2016},
  doi       = {10.1145/2901739.2903506},
  url       = {http://dx.doi.org/10.1145/2901739.2903506},
}