Evaluating Prompt-Learning-Based API Review Classification Through Pre-Trained Models

Authors

Xia Li and Allen Kim, Kennesaw State University, USA

Abstract

To improve work efficiency and code quality in modern software development, developers commonly reuse Application Programming Interfaces (APIs) provided by third-party libraries and frameworks rather than implementing functionality from scratch. However, due to time constraints in software development, API developers often refrain from providing detailed explanations or usage instructions for their APIs, leaving users confused. Categorizing API reviews into different groups therefore makes them easier to consume. In this paper, we conduct a comprehensive study to evaluate the effectiveness of prompt-based API review classification with various pre-trained models such as BERT, RoBERTa, and BERTOverflow. Our experimental results show that prompts with complete context achieve the best effectiveness, and that RoBERTa outperforms the other two models owing to the size of its training corpus. We also apply the widely used fine-tuning approach LoRA and show that training overhead can be reduced significantly (e.g., by 50%) without loss of classification effectiveness.
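To illustrate the prompt-learning setup the abstract describes, the sketch below builds a cloze-style prompt around an API review so that a masked language model (such as BERT or RoBERTa) can predict a label word at the mask position. The template wording and label words here are illustrative assumptions, not the templates used in the paper.

```python
def build_prompt(review: str, mask_token: str = "[MASK]") -> str:
    """Wrap an API review in a cloze-style template.

    A pre-trained masked LM fills in the mask position; the predicted
    word is mapped back to a review category by a verbalizer. The
    template text here is a hypothetical example, not the paper's.
    """
    return f"API review: {review} This review discusses {mask_token}."


# Hypothetical verbalizer: maps the model's predicted word to a category.
VERBALIZER = {
    "performance": "Performance",
    "documentation": "Documentation",
    "usability": "Usability",
}


def label_from_prediction(predicted_word: str) -> str:
    """Map a predicted mask-filler word to a review category label."""
    return VERBALIZER.get(predicted_word.lower(), "Other")
```

In practice the prompt would be tokenized and passed through the masked-LM head, and only the logits of the verbalizer words at the mask position would be compared; a "complete context" prompt, as the abstract notes, keeps the full review text inside the template.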

Keywords

Software engineering, API review classification, pre-trained models, fine-tuning.