  • Automating the correctness ...
    Cotroneo, Domenico; Foggia, Alessio; Improta, Cristina; Liguori, Pietro; Natella, Roberto

    The Journal of Systems and Software, 10/2024, Volume 216
    Journal Article (peer reviewed, open access)

    Evaluating the correctness of AI-generated code is a challenging open problem. In this paper, we propose a fully automated method, named ACCA, to evaluate the correctness of AI-generated code for security purposes. The method uses symbolic execution to assess whether the AI-generated code behaves as a reference implementation. We use ACCA to assess four state-of-the-art models trained to generate security-oriented assembly code, and we compare the results of the evaluation against different baseline solutions, including output similarity metrics widely used in the field and the well-known ChatGPT, the AI-powered language model developed by OpenAI. Our experiments show that our method outperforms the baseline solutions and assesses the correctness of AI-generated code similarly to human-based evaluation, which is considered the ground truth for assessment in the field. Moreover, ACCA correlates very strongly with human evaluation (Pearson's correlation coefficient r = 0.84 on average). Finally, since it is a fully automated solution that requires no human intervention, the proposed method assesses every code snippet in ∼0.17 s on average, which is, in our experience, far lower than the average time required by human analysts to manually inspect the code.

    Highlights:
    • ACCA aligns with human evaluation of code correctness in 93% of cases.
    • The code correctness computed by ACCA is the closest to human evaluation.
    • ACCA is the most correlated with human evaluation across all predictions.
    • The computational time required by ACCA is lower than that of human evaluation, on average.
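
    To illustrate the symbolic-execution idea the abstract describes, the minimal Python sketch below symbolically interprets two tiny straight-line assembly-like snippets and checks whether they leave the registers in the same symbolic state. Everything here is invented for illustration (the toy instruction set, the sym_exec helper, and the snippets); ACCA's actual pipeline for security-oriented assembly is more involved.

    from sympy import symbols, simplify

    def sym_exec(instructions, state):
        """Symbolically interpret a tiny, made-up subset of assembly.

        Each instruction is a (op, dst, src) triple; src is either a
        register name or an integer literal. Registers hold symbolic
        expressions, so the final state summarizes all behaviors at once.
        """
        state = dict(state)
        for op, dst, src in instructions:
            val = state[src] if src in state else int(src)
            if op == "mov":
                state[dst] = val
            elif op == "add":
                state[dst] = state[dst] + val
            elif op == "sub":
                state[dst] = state[dst] - val
            else:
                raise ValueError(f"unsupported op: {op}")
        return state

    # Initial register contents are free symbols, not concrete values.
    eax, ebx = symbols("eax ebx", integer=True)
    init = {"eax": eax, "ebx": ebx}

    # Reference implementation: eax := eax + ebx + 1
    reference = [("add", "eax", "ebx"), ("add", "eax", "1")]
    # Candidate (e.g., AI-generated): same effect, different instruction order.
    candidate = [("add", "eax", "1"), ("add", "eax", "ebx")]

    ref_state = sym_exec(reference, init)
    cand_state = sym_exec(candidate, init)

    # The snippets are deemed behaviorally equivalent if every register
    # ends up holding the same symbolic expression.
    equivalent = all(simplify(ref_state[r] - cand_state[r]) == 0 for r in init)
    print("equivalent:", equivalent)  # -> equivalent: True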
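
    For the correlation figure quoted in the abstract, Pearson's r between automated and human correctness judgments can be computed as sketched below. The scores are invented placeholders, not data from the paper.

    from scipy.stats import pearsonr

    # Hypothetical per-snippet correctness scores (1 = correct, 0 = incorrect);
    # these numbers are placeholders, not results from the paper.
    automated = [1, 0, 1, 1, 0, 1, 0, 1]
    human     = [1, 0, 1, 0, 0, 1, 0, 1]

    r, p_value = pearsonr(automated, human)
    print(f"Pearson's r = {r:.2f} (p = {p_value:.3f})")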