E-resources
Full text available
Peer reviewed
  • Real-time anomaly detection...
    Cho, Hae-Won; Shin, Seung-Jun; Seo, Gi-Jeong; Kim, Duck Bong; Lee, Dong-Hee

Journal of Materials Processing Technology, April 2022, Volume 302
    Journal Article

Highlights:
    • A real-time anomaly detection method based on a convolutional neural network in wire arc additive manufacturing is presented.
    • A convolutional neural network-based model is created to detect balling and bead-cut defects with 98 % classification accuracy and 0.033 s/frame processing time.
    • Experiments are conducted using molybdenum material.
    • A prototype system is implemented to classify the current image data into normal and abnormal states.

    Wire arc additive manufacturing (WAAM) has received attention because of its high deposition rate, low cost, and high material utilization. However, quality issues are critical in WAAM because it builds upon arc welding technology, which can result in low precision and poor quality of the melted parts. Hence, anomaly detection is essential for identifying abnormal behaviors and process instability during WAAM and for reducing the time and cost of post-process treatment. Anomaly detection algorithms using machine learning have been studied for fused deposition modeling and laser powder bed fusion; however, their implementation for in situ quality monitoring in WAAM has received little investigation. This work presents a real-time anomaly detection method that uses a convolutional neural network (CNN) in WAAM. The proposed method enables the creation of CNN-based models that detect abnormalities by learning from melt pool image data, which are pre-processed to improve learning performance. A prototype system was implemented to classify melt pool images into “normal” and “abnormal” states, with the latter covering balling and bead-cut defects. Experiments were conducted using molybdenum, a cost-intensive and hard-to-machine material. Four CNN-based models were created using MobileNetV2, DenseNet169, ResNet50V2, and InceptionResNetV2, and their performances were validated in terms of classification accuracy and processing time. The MobileNetV2 model yielded the best performance, with a classification accuracy of 98 % and a processing time of 0.033 s/frame. This model was also compared with the object detection algorithm “YOLO”, which yielded a classification accuracy of 73.5 % and a processing time of 0.067 s/frame.
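
    The record gives only a high-level summary of the method, so the sketch below is purely illustrative: it shows one common way to set up a MobileNetV2-based binary classifier for melt pool frames by transfer learning in Keras. The input size, frozen backbone, sigmoid head, and hyperparameters are assumptions for illustration and are not taken from the paper.

    ```python
    # Illustrative sketch (not from the paper): a MobileNetV2-based binary
    # classifier for melt pool images, assuming 224x224 RGB frames labeled
    # "normal" (0) vs. "abnormal" (1), trained by transfer learning.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_melt_pool_classifier(input_shape=(224, 224, 3)):
        # Pre-trained MobileNetV2 backbone without its ImageNet classifier head
        base = tf.keras.applications.MobileNetV2(
            input_shape=input_shape, include_top=False, weights="imagenet"
        )
        base.trainable = False  # freeze the backbone for initial transfer learning

        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
            base,
            layers.GlobalAveragePooling2D(),
            layers.Dropout(0.2),
            layers.Dense(1, activation="sigmoid"),  # probability of the "abnormal" state
        ])
        model.compile(
            optimizer=tf.keras.optimizers.Adam(1e-3),
            loss="binary_crossentropy",
            metrics=["accuracy"],
        )
        return model

    # Hypothetical usage: train on pre-processed melt pool image datasets,
    # then score a single frame during deposition.
    # model = build_melt_pool_classifier()
    # model.fit(train_ds, validation_data=val_ds, epochs=10)
    # prob_abnormal = model.predict(frame[None, ...])[0, 0]
    ```

    A frozen pre-trained backbone with a small sigmoid head is a typical design when the labeled dataset is modest and per-frame latency matters, which is consistent with the real-time framing of the abstract, though the authors' actual training setup may differ.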