An Explainable Machine Learning Approach to Automated Analysis of Phased Array Inspection Data
Tracks
NDT UNLOCKED
Knowledge Level - NDT Level I/NDT Level II
Knowledge Level - NDT Level III
Knowledge Level - Student
Presentation Topic Level - Intermediate
Target Audience - General Interest
Target Audience - Level III Managers
Target Audience - Research/Academics
Target Audience - NDT Engineers
Target Audience - Technicians/Inspectors
Thursday, October 9, 2025 | 9:00 AM - 9:30 AM
Speaker
Jonathan Lesage
Application Development Engineer
Acuren
Presentation Description
Recent advances in deep learning associated with the development of the transformer architecture have led to unprecedented breakthroughs in natural language processing and computer vision. As a result of these innovations, a frenzied enthusiasm to adapt sophisticated deep learning methods to automate all aspects of manual human workflows has taken hold in many industries, including Nondestructive Testing (NDT). Unfortunately, direct application of large transformer- or convolutional-neural-network-based models to the automated analysis of NDT data is complicated by several factors:
1. Labeled training data is relatively scarce in NDT
2. Fine tuning large computer vision models is not viable for most inspection modalities (other than visual testing)
3. The process of analyzing inspection data is often rigidly defined by codes and standards
4. Understanding of analysis decisions is required due to the critical nature of decisions taken based on inspection findings: decisions should be explainable to and auditable by Level III human analysts
In the case of Phased Array Ultrasonic Testing (PAUT) in particular, significant differences in data dimensionality (signal range and number of beams, and hence image dimensions) arise purely from the physical dimensions of the component to be inspected, and assessing the relevance of scan features requires awareness of the potential sources of benign reflections (weld cap/root, back-wall, etc.). In addition, the choice of essential technique parameters (probe frequency/aperture, wedge dimensions, focusing options, etc.) significantly affects PAUT image quality (signal-to-noise ratio, resolution). Taken together, these sources of variation preclude the training of a single deep learning model in which the PAUT signals (or images rendered from them) are fed directly as features.
Fortunately, rules for detection, sizing, and flaw acceptance are often strictly defined by the codes and standards (ISO, AWS, ASME) governing the interpretation of ultrasonic inspection data. With few exceptions, these rules can be implemented automatically by software to identify a list of potentially relevant indications in a PAUT scan. The relevance of each indication in this auto-generated list must then be confirmed, one by one, by a human analyst. Depending on the inspection geometry and PAUT technique, the list of indications requiring human review may be polluted with hundreds of geometric reflections that are time-consuming to exhaustively investigate and disqualify.
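As a rough sketch of how such code-mandated detection rules can be implemented algorithmically, the example below applies a simple amplitude-threshold rule to a calibrated 1-D trace and returns a list of candidate indications. The threshold value, the `Indication` record, and the rule itself are illustrative assumptions only; real codes (ASME, AWS, ISO) specify far more detailed detection and sizing criteria.

```python
from dataclasses import dataclass

@dataclass
class Indication:
    start: int       # first sample index at or above threshold
    end: int         # last sample index at or above threshold
    peak_amp: float  # maximum calibrated amplitude within the region

def detect_indications(amplitudes, threshold=0.2):
    """Scan a calibrated amplitude trace and list contiguous regions
    meeting or exceeding a hypothetical detection threshold (here 20%
    of reference level, an assumed value for illustration)."""
    indications = []
    start = None
    for i, a in enumerate(amplitudes):
        if a >= threshold and start is None:
            start = i                      # region begins
        elif a < threshold and start is not None:
            region = amplitudes[start:i]   # region ends at i - 1
            indications.append(Indication(start, i - 1, max(region)))
            start = None
    if start is not None:                  # region runs to end of trace
        indications.append(Indication(start, len(amplitudes) - 1,
                                      max(amplitudes[start:])))
    return indications
```

Each detected region would then be handed to the relevance-assessment stage described below, rather than being reported directly.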
To avoid the difficulties associated with training a deep learning model and to ensure consistency with applicable codes and standards, we propose an alternative approach to automating the analysis of PAUT data whereby code-mandated rules for detecting, sizing, grouping, and classifying indications are implemented algorithmically. From this list of indications, features often considered by human analysts to assess indication relevance (calibrated amplitude, position, shape, size, etc.) are extracted and used to train simple candidate machine learning algorithms to automatically determine indication relevance. A large volume of training data, in the form of indications labelled as relevant or non-relevant by certified inspectors, was readily generated through routine analysis of PAUT weld inspection data. Representing indication position relative to the weld bevel, scaling indication amplitudes with respect to proximate indications, and ignoring indications reflected from the nominal weld cap/root were all found to make the model's predictions more robust to changes in inspection geometry and technique. Augmentation of the training set, required to balance the proportion of relevant to non-relevant indications, was accomplished by generating synthetic indications whose features would unquestionably be considered relevant by human analysts, e.g., high-amplitude indications appearing along the nominal weld bevel.
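The augmentation step described above can be sketched as follows. The feature tuple (relative amplitude, distance from the nominal weld bevel, indication length) and its value ranges are illustrative assumptions, not the actual feature set used in this work; the idea is simply to synthesize unambiguous flaw-like examples until the labelled set is balanced.

```python
import random

def augment_relevant(examples, target_ratio=1.0, rng=None):
    """Balance a labelled set [(features, label), ...] (label 1 = relevant)
    by synthesizing obviously relevant examples: near-saturating relative
    amplitude, positioned on the nominal weld bevel, flaw-like length.
    All feature ranges below are hypothetical, for illustration only."""
    rng = rng or random.Random(0)
    n_rel = sum(1 for _, y in examples if y == 1)
    n_non = len(examples) - n_rel
    synthetic = []
    while n_rel + len(synthetic) < target_ratio * n_non:
        features = (
            rng.uniform(0.8, 1.0),   # high amplitude relative to neighbours
            rng.uniform(-0.5, 0.5),  # on/near the nominal weld bevel (mm)
            rng.uniform(5.0, 20.0),  # flaw-like length (mm)
        )
        synthetic.append((features, 1))
    return examples + synthetic
```

The balanced set can then be fed to any simple classifier (logistic regression, a small tree ensemble, etc.) over the same low-dimensional features.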
The primary advantage of the proposed method is that all detection decisions are made in accordance with codes and standards; only the disqualification of irrelevant reflections (geometric indications) is performed by a machine learning algorithm. The features included in the model are relatively few and were chosen to capture the information used by human analysts to determine whether indications should be reported; accordingly, the model's decisions are inherently more explainable to the human analyst. The number of training examples needed to achieve human-level performance is considerably smaller than would be required to train a deep learning model, owing to the low-dimensional feature set, while the high-level nature of the features allows examples of critical flaws to be easily simulated for augmentation purposes. The effectiveness of the proposed automated inspection technique is demonstrated on field data collected during periodic inspection of wind turbine welds and is shown to yield results comparable to those obtained by certified inspection personnel.
Biography
Jonathan is an NDT Applications Engineer with Acuren's Applications Development Group. He began his career at Eclipse Scientific (since acquired by Acuren) as a signal/image processing specialist in 2016. Since that time, he has worked on novel techniques for processing Nondestructive Testing data. He holds a Ph.D. from the University of Toronto and a CSWIP Phased Array Level II certification. Jonathan works closely with his colleague Mohammad Marvasti to provide support to Acuren's Advanced Services divisions as well as consultation services for external clients. Jonathan also contributes to the development of new features in the BeamTool ultrasonic technique design software.
