  • Bhandari, Jitendra; Knechtel, Johann; Narayanaswamy, Ramesh; Garg, Siddharth; Karri, Ramesh

    arXiv (Cornell University), 06/2024
    Paper, Journal Article

    This work investigates the potential of tailoring Large Language Models (LLMs), specifically GPT-3.5 and GPT-4, for the domain of chip testing. A key aspect of chip design is functional testing, which relies on testbenches to evaluate the functionality and coverage of Register-Transfer Level (RTL) designs. We aim to enhance testbench generation by incorporating feedback from commercial-grade Electronic Design Automation (EDA) tools into LLMs. Through iterative feedback from these tools, we refine the testbenches to achieve improved test coverage. Our case studies present promising results, demonstrating that this approach can effectively enhance test coverage. By integrating EDA tool feedback, the generated testbenches become more accurate in identifying potential issues in the RTL design. Furthermore, we extended our study to use this enhanced test-coverage framework for detecting bugs in RTL implementations.
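    The iterative loop the abstract describes can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the authors' implementation: `llm_generate_testbench`, `eda_coverage_report`, and the toy coverage model are stand-in names for an LLM API call and a commercial EDA tool's coverage report.

    ```python
    # Hypothetical sketch of the feedback loop: an LLM drafts a testbench,
    # an EDA tool reports coverage, and the report is folded back into the
    # next prompt. All names and the toy coverage numbers are illustrative
    # stand-ins, not the paper's actual tooling.

    def llm_generate_testbench(prompt: str) -> str:
        """Stand-in for a GPT-3.5/GPT-4 call returning testbench code."""
        return f"// testbench drafted from prompt ({len(prompt)} chars)"

    def eda_coverage_report(testbench: str, iteration: int) -> float:
        """Stand-in for an EDA coverage run; here coverage simply
        improves with each refinement so the loop is observable."""
        return min(1.0, 0.5 + 0.2 * iteration)

    def refine_testbench(rtl_spec: str, target: float = 0.9,
                         max_iters: int = 10) -> tuple[str, float]:
        prompt = f"Write a testbench for: {rtl_spec}"
        testbench, coverage = llm_generate_testbench(prompt), 0.0
        for i in range(max_iters):
            coverage = eda_coverage_report(testbench, i)
            if coverage >= target:
                break
            # Feed the tool's coverage report back into the next prompt.
            prompt = (f"Improve this testbench for: {rtl_spec}\n"
                      f"Current coverage: {coverage:.0%}\n{testbench}")
            testbench = llm_generate_testbench(prompt)
        return testbench, coverage

    tb, cov = refine_testbench("4-bit counter RTL")
    print(cov >= 0.9)  # loop stops once the coverage target is met
    ```

    In a real flow the stubs would be replaced by an LLM API call and a coverage run in a simulator, with the coverage report (e.g. uncovered branches) quoted back to the model.
    
    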