
Ultramicroscopy 247 (2023) 113703


Machine learning based de-noising of electron back scatter patterns of various crystallographic metallic materials fabricated using laser directed energy deposition

K. V. Mani Krishna a,c, R. Madhavan a,b, Mangesh V. Pantawane a,b, Rajarshi Banerjee a,b, Narendra B. Dahotre a,b,*

a Center for Agile and Adaptive Additive Manufacturing, University of North Texas, 3940 N Elm St, Denton, TX 76207, USA
b Department of Materials Science and Engineering, University of North Texas, Denton, TX 76207, USA
c Mechanical Metallurgy Division, Bhabha Atomic Research Centre, Mumbai, 400 085, India

A R T I C L E I N F O

Keywords: EBSD; Electron microscopy; Machine learning; Image processing

A B S T R A C T

A novel machine learning (ML) method of refining noisy Electron Back Scatter Patterns (EBSPs) is proposed. For this, conditional generative adversarial networks (c-GAN) have been employed. The problem of de-noising the EBSPs was formulated as an image translation task conditioned on the input images, yielding refined/de-noised EBSPs which can be indexed using conventional Hough transform based indexing algorithms. The ML model was trained using 10,000 EBSPs acquired under different settings for additively manufactured FCC, BCC and HCP alloy samples, ensuring enough diversity and complexity in the training data set. Pairs of noisy and corresponding optimal EBSPs were acquired by suitable tweaking of the EBSP acquisition parameters such as beam defocus, pattern binning and EBSD camera exposure duration. The trained model brought out a significant improvement in the EBSD indexing success rate on test data, accompanied by an improvement in indexing accuracy quantified through 'pattern fit'. Complete automation of the EBSP refinement was demonstrated, wherein an entire EBSD scan data set can be fed to the model to obtain refined EBSPs from which high quality EBSD data can be obtained.

* Corresponding author. E-mail address: narendra.dahotre@unt.edu (N.B. Dahotre).
https://doi.org/10.1016/j.ultramic.2023.113703
Received 1 December 2022; Received in revised form 4 February 2023; Accepted 12 February 2023; Available online 19 February 2023
0304-3991/© 2023 Elsevier B.V. All rights reserved.

1. Introduction

Electron back scatter diffraction (EBSD) is a powerful characterization technique that enables microstructural characterization and quantification through determination of the crystallographic orientation of the underlying crystals within the region of interest [1]. This orientation measurement forms the basis for the entire gamut of information extracted from EBSD analysis. The orientation determination, and hence the entire EBSD analysis, depends on the successful indexing of the raw Electron Back Scatter Patterns (EBSPs) acquired from the sample. Typically, indexing the EBSPs consists of (a) localizing the Kikuchi bands (band detection) and (b) solving for the crystal orientation given the location of the Kikuchi bands, the other experimental conditions and the crystallographic information of the diffracting phases. Conventionally, step (a) is achieved by the Hough transformation, which has proved to be effective in Kikuchi band detection [1,2]. However, the success of the Hough transform in determination of the Kikuchi bands depends on the EBSP quality (signal to noise ratio). The typical aim of the sample preparation and microscope operating conditions is to maximize the quality of the EBSPs to enable accurate band localization, which results in successful indexing of the EBSP [1]. Unfortunately, in many scenarios, despite best efforts, the quality of the EBSPs remains poor and existing EBSP processing techniques cannot localize the Kikuchi bands, leading to poor EBSD hit rates [3]. These scenarios include samples with a high degree of cold work, dynamic experiments where the EBSP acquisition time (camera exposure time) has to be kept very small, and uneven sample surfaces (faceted samples). In such situations, where the information in the raw EBSPs is masked by undesirable noise and artefacts, machine learning (ML) models can be of great value in improving the quality or cancelling out the noise in the EBSPs. If the quality of the EBSPs can be improved by ML, such difficult to index samples/scenarios will become accessible for better characterization and newer insights. For the cases of EBSPs where step (a) is rendered difficult on account of the poor quality of the EBSPs, different approaches such as dictionary indexing (DI) have been proposed and applied in recent times [3–5].

This approach, although very effective, requires offline storage and large computational times (as each raw EBSP needs to be compared with the entire dictionary of simulated EBSPs). This has prompted the development of hybrid algorithms which take advantage of the pattern recognition capabilities of ML models and the noise handling capabilities of the DI methods [6,7]. These models/methods, again, are developed keeping the entire pipeline from EBSP processing to orientation solution in mind and need to be trained for each crystallographic phase in question. However, it may be emphasized that the problem of indexing, in the majority of cases, boils down to accurate localization/detection of the Kikuchi bands, as the methods of solving for the crystal orientation from the identified Kikuchi band locations are rather straightforward and robust [2,8]. Hence, in the current work, we address the core issue of de-noising poor quality EBSPs to enable accurate localization of the Kikuchi bands through ML driven noise reduction and feature enhancement, and leave the orientation solution to the already available robust indexing algorithms [2,8].

Recent advances in ML methods, especially deep learning models [9–11], have brought in a paradigm shift in image processing. Some tasks that are challenging for conventional algorithms/methods have now been rendered trivial for ML models and are already deployed on a large scale [12]. These include image segmentation [13–15], style transfer [16], generation of super resolution images [17–20], etc. The current issue of de-noising the EBSPs can benefit from the application of existing ML models that are hugely successful in image translation tasks. Image translation is a generic term that describes the task of converting a given image into a 'related' target image with the objective of fulfilling certain requirements [21]. In the present case, the 'requirement' is to reduce the noise and enhance the useful features (Kikuchi lines/bands) of the image. The current work is an attempt to employ the hugely successful pix2pix model (a conditional generative adversarial network (c-GAN) machine learning model [22,23]) for the refinement or de-noising of EBSPs. While an impressive array of applications (mostly artistic or for entertainment; see references 2, 3, 5, 6, 7 and 8 of [23]) have been developed using image translation on natural images, this powerful technique has not so far been exploited in the scientific field for aiding materials characterization. The current work is an attempt to address this issue and demonstrate the ability of ML to enable successful characterization of otherwise difficult to characterize materials/samples. In the rest of the manuscript, the details of the model employed and the results obtained are described. The efficacy of the technique is demonstrated by applying it to a data set on which conventional methods of EBSP indexing did not work satisfactorily.

1.1. Problem statement

Mathematically, a machine learning based image translation problem can be described as the task of learning a mapping function G that transforms an observed image x to a predicted image y using some random noise vector z, expressed as:

G : {x, z} → y   (1)

The role of z in the above expression is to avoid deterministic outputs (critical in applications where a representative distribution of images needs to be generated from a given single x). However, for the present problem, it is not essential or even desirable to get non-deterministic outputs for a given input. In the current context, Eq. (1) translates to finding a mapping function (G) that 'translates' a poor quality EBSP (x), which is not amenable to automated indexing by existing EBSD indexing methods, to an equivalent high quality (y) and/or high resolution pattern which is suitable for indexing by existing algorithms to enable accurate determination of the crystallographic orientation. Fig. 1 demonstrates the task being attempted in a pictorial manner. The target of the present work is to convert such poor quality EBSPs into better, indexable patterns without introducing artefacts.

Fig. 1. Illustration of the problem statement. The poor EBSPs (marked A) in the above figure need to be translated into equivalent good quality patterns (marked B). Thus, it is known as image translation from A->B.

2. The model

The machine learning model employed for this task was the Pix2Pix architecture, which is a conditional GAN (Generative Adversarial Network) based model [23]. There exists an enormous open literature on the principles and practices of GAN systems and only a brief summary relevant to the current work is provided here.

Generative Adversarial Networks, or GANs for short, are an approach to generative modelling using deep learning methods such as convolutional neural networks. Generative modelling is an unsupervised learning task in machine learning that involves automatically discovering and learning the regularities or patterns in input data in such a way that the model can be used to generate or output new examples that plausibly could have been drawn from the original dataset. GANs train a generative model by framing the problem as a supervised learning one with two sub-models: the 'Generator model' that we train to generate new examples, and the 'Discriminator model' that tries to classify examples as either real (from the domain) or fake (generated). The two models are trained together in a zero-sum, adversarial game until the discriminator model is fooled about half the time, meaning the generator model is generating plausible examples. Fig. 2 depicts the architecture of the GAN model.

An open-source Python based TensorFlow [24] implementation of the pix2pix c-GAN was employed in this work. The model consists of a Generator (G) network of U-net architecture, having skip connections between the encoder and decoder networks, and a Discriminator (D) which uses a PatchGAN to classify the input images as fake or real. The individual layers of these networks are essentially standard convolution-batch normalization-ReLU blocks, typical of deep convolutional neural networks (see Fig. 2b and c). All convolutions are 4×4 spatial filters applied with stride 2. Convolutions in the encoder, and in the discriminator, downsample by a factor of 2, whereas in the decoder they upsample by a factor of 2 [23].

The loss function employed for this work is given below:

G^* = \arg\min_G \max_D \, \lambda_{cGAN}\, \mathcal{L}_{cGAN}(G, D) + \lambda_{L1}\, \mathcal{L}_{L1}(G) + \lambda_{L2}\, \mathcal{L}_{L2}(G)   (2)

where \mathcal{L}_{L1}(G), \mathcal{L}_{L2}(G) and \mathcal{L}_{cGAN}(G, D) are the L1, L2 and cGAN losses of the generator, and \lambda_{L1}, \lambda_{L2} and \lambda_{cGAN} are their respective weights. \mathcal{L}_{cGAN}(G, D) is defined as

\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_y[\log D(y)] + \mathbb{E}_{x,z}[\log(1 - D(G(x, z)))]   (3)

Unfortunately, assessing the performance of generative networks is not trivial, and typical metrics such as Mean Square Error (MSE), Mean Intersection over Union (MIOU) etc. are not applicable/suitable in the current context. We have applied the structural similarity index, as defined in Eq. (4) [25], between the ground truth EBSPs and the EBSPs predicted by the ML model as the measure of the accuracy/performance of the model.
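To make the encoder/decoder description of Section 2 concrete, a minimal sketch of such building blocks is given below. It assumes the open-source TensorFlow pix2pix implementation referenced above; the layer widths, the two-level depth and the 256×256 single-channel pattern size are illustrative assumptions, not values reported in this work.

import tensorflow as tf

def downsample(filters):
    # Encoder block: 4x4 convolution with stride 2 (halves the spatial size),
    # followed by batch normalization and a LeakyReLU activation.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters, 4, strides=2, padding='same', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.LeakyReLU(),
    ])

def upsample(filters):
    # Decoder block: 4x4 transposed convolution with stride 2 (doubles the
    # spatial size), followed by batch normalization and ReLU.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2DTranspose(filters, 4, strides=2, padding='same', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
    ])

# A toy two-level U-net: encoder blocks halve the spatial size, decoder blocks
# double it, and a skip connection concatenates encoder features with the
# mirrored decoder stage.
inp = tf.keras.layers.Input(shape=(256, 256, 1))       # single-channel EBSP (assumed size)
d1 = downsample(64)(inp)                                # 128 x 128
d2 = downsample(128)(d1)                                # 64 x 64
u1 = upsample(64)(d2)                                   # back to 128 x 128
u1 = tf.keras.layers.Concatenate()([u1, d1])            # skip connection
out = tf.keras.layers.Conv2DTranspose(1, 4, strides=2, padding='same',
                                      activation='tanh')(u1)   # 256 x 256 de-noised EBSP
generator = tf.keras.Model(inputs=inp, outputs=out)

The full pix2pix generator stacks several more of these blocks, and the PatchGAN discriminator reuses the same downsample block on the concatenated (input, target) pair.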

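As an illustration of Eq. (2), a minimal sketch of the composite generator objective is shown below, again assuming the TensorFlow implementation mentioned in Section 2. The loss weights are placeholders rather than the values used for the trained models (the paper compares runs with different weightings, including 100% weight on the L2 term).

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_output_on_fake, generated, target,
                   w_cgan=1.0, w_l1=100.0, w_l2=0.0):
    # cGAN term of Eq. (3): the generator is rewarded when the discriminator
    # labels its output as real (all ones).
    cgan_term = bce(tf.ones_like(disc_output_on_fake), disc_output_on_fake)
    # L1 and L2 terms: pixel-wise distances between the generated EBSP and the
    # ground-truth (reference) EBSP.
    l1_term = tf.reduce_mean(tf.abs(target - generated))
    l2_term = tf.reduce_mean(tf.square(target - generated))
    # Weighted sum of Eq. (2); the weights above are illustrative placeholders.
    return w_cgan * cgan_term + w_l1 * l1_term + w_l2 * l2_term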

Fig. 2. (a) Schematic illustrating the working of the Pix2Pix architecture for the application of de-noising low quality EBSPs to corresponding high fidelity EBSPs. Schematic architectures of the Generator (b) and Discriminator (c) are also shown.

SSIM(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}   (4)

where \mu_x, \mu_y are the averages of x and y, \sigma_x^2, \sigma_y^2 the variances of x and y, \sigma_{xy} the covariance of x and y, and C_1, C_2 are stabilizing parameters. As the structural similarity (SSIM) index is widely used in image processing, recognition and segmentation tasks, it is adopted as the metric in the present work. Further, the definition of the SSIM index ensures that its value lies between -1 and 1 irrespective of image bit depth and colour space. An alternative and computationally efficient metric suitable for the current task could be the image inner product [26]. However, normalization of the inner-product score for inter image set comparison may not be straightforward and is sensitive to image bit depth. Hence the SSIM index was chosen as the preferred metric in the present work for the assessment of the model accuracy. Nonetheless, the overall results of the current work are likely to remain similar even if the inner product were employed as the accuracy metric. The mean SSIM over the entire test data set is evaluated after every training epoch and training is stopped once there is no improvement in the mean SSIM score on the test data set. The model was trained on a system with an NVIDIA RTX 6000 GPU with 10,752 CUDA cores and 48 GB memory. The typical time required for one epoch in the present case was around 400 s, and models were trained for 50 to 80 epochs.

2.1. Data set collection and preparation

The importance of the training data set in determining the success of an ML model can hardly be overemphasized. In the current context, the problem of refinement of EBSPs through an ML approach presents unique challenges and opportunities in terms of data collection and preparation. It may be noted that previous works employing ML for EBSP processing have used simulated EBSPs (from forward modelling), generated from the corresponding quaternion descriptions of the orientations, for training the models [6]. In the current work, however, we employ pairs of experimental high fidelity EBSPs (ground truth) and corresponding poor or noisy EBSPs (acquired using non-optimal EBSP acquisition parameters) as the training examples, as per the procedure detailed below.

A number of EBSD scans were performed with different EBSP acquisition settings at each given region of interest (ROI) at an identical step size (5 μm). These included varying beam settings (focus, stigmation, etc.) and detector settings (binning, exposure time, gain, etc.). Extra care was taken to minimise sample drift during the acquisition and to avoid any undesirable stage movement. As shown in Fig. 3, for each ROI, EBSPs were first acquired with optimal acquisition settings (to act as ground truth images or 'Reference EBSPs', P_R(i,j)), followed by a series of intermediate scans with distorted beam and EBSD camera parameters (non-optimal acquisition parameters) to acquire the corresponding noisy/poor quality EBSPs (P_I^k(i,j), with k = 1, 2, 3, ...).

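A minimal sketch of the SSIM-based stopping criterion described at the end of Section 2 is given below. It assumes TensorFlow's built-in SSIM implementation, patterns scaled to [0, 1], and a user-supplied evaluation set of (noisy, reference) EBSP pairs; the patience value and the variable names are illustrative assumptions, not values from this work.

import tensorflow as tf

def mean_ssim(generator, noisy, reference):
    # Mean SSIM (Eq. 4) between ML-predicted and ground-truth EBSPs,
    # assuming pattern intensities are scaled to the range [0, 1].
    predicted = generator(noisy, training=False)
    return float(tf.reduce_mean(tf.image.ssim(predicted, reference, max_val=1.0)))

def train_with_ssim_stopping(generator, train_one_epoch, test_noisy, test_reference,
                             max_epochs=80, patience=5):
    # Train until the mean SSIM on the held-out set stops improving.
    # 'train_one_epoch' is any callable that runs one epoch of generator and
    # discriminator updates; 'patience' is an assumption, not a reported value.
    best = -1.0
    stale = 0
    for epoch in range(max_epochs):
        train_one_epoch()
        score = mean_ssim(generator, test_noisy, test_reference)
        if score > best:
            best, stale = score, 0
            generator.save_weights('best_generator.weights.h5')
        else:
            stale += 1
            if stale >= patience:
                break   # no further improvement in mean SSIM on the test set
    return best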

Fig. 3. Illustration of the variation in the EBSPs and corresponding EBSD maps collected from the same sample location but with different beam and EBSD camera settings. The EBSPs corresponding to a specific (arbitrarily chosen) point are shown above each EBSD map for illustration. The pair (P_I^k(i,j) : P_R(i,j)) forms the training example. The actual acquisition conditions of the EBSPs and their quality metric (average CI) are indicated below each map as an image caption.

At the end of such scans, one more cross-reference scan was acquired with the optimal acquisition settings (P_R(i,j)). The point-to-point comparison of the cross-reference scan data with the initial reference scan (P_R(i,j)) for each ROI served as the measure of the extent of sample drift, if any, and of the validity of the data points of the intermediate noisy EBSPs for use as training examples.

Several EBSD scans of 500 μm × 500 μm with a step size of 5 μm were recorded for preparing the training, testing and validation data sets. For each ROI, the total time of acquisition (including the reference, cross-reference and intermediate distorted/noisy scans) was less than 1 h. Thus, in a few hours of acquisition time, one can acquire several thousands of training examples (pairs of noisy and ground truth EBSPs) encompassing significant diversity (in the orientations of the grains covered) and complexity (samples with different crystal structures). In many ML works, collecting such huge data sets and preparing the ground truth annotations is one of the most arduous and labour-intensive tasks.
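A schematic sketch of how the training pairs described above could be assembled is shown below. It is not the authors' code: the per-point dictionary layout and field names are assumptions, while the numerical thresholds (CI > 0.8, fit < 0.5, cross-reference misorientation < 0.5°) are the selection criteria quoted later in this section.

def build_training_pairs(reference, cross_reference, intermediate_scans,
                         ci_min=0.8, fit_max=0.5, misori_max=0.5):
    # 'reference' and 'cross_reference' are assumed to be lists of per-point dicts
    # with keys 'pattern' (EBSP image), 'ci', 'fit' and, for the cross-reference
    # scan, 'misorientation_to_reference' (degrees). 'intermediate_scans' is a
    # list of scans acquired with non-optimal settings over the same ROI grid.
    pairs = []
    for idx, ref_point in enumerate(reference):
        xref_point = cross_reference[idx]
        # Keep only points where the ground truth is trustworthy and no drift occurred.
        if ref_point['ci'] <= ci_min or ref_point['fit'] >= fit_max:
            continue
        if xref_point['misorientation_to_reference'] >= misori_max:
            continue
        for scan in intermediate_scans:
            noisy = scan[idx]['pattern']
            pairs.append((noisy, ref_point['pattern']))   # (input, ground truth)
    return pairs

Random distortions (Gaussian blur and noise, salt-and-pepper noise, flips and rotations, as listed in Section 2.1) would then be applied to the noisy member of each pair during training to further diversify the data set.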


De-noising the EBSPs through ML thus presents a unique case where voluminous data collection and ground truth annotation are rather easy. It may be noted that the EBSPs of the reference scans serve as ground truth without any manual annotation being involved. The fact that automated EBSD collection is routine practice and well standardized helps in the collection of large data sets in rather short times. While these points emphasize the unique opportunities present in the current task of de-noising the EBSPs, a set of new challenges emerges that is unique to this problem, as described below.

Generating a high quality EBSP (P_R(i,j)) for every noisy EBSP (P_I^k(i,j)) to prepare the training data set is a challenge in itself and is not practical in many situations. As stated previously, in many scenarios we simply cannot obtain such high quality EBSPs. These include, but are not limited to, samples with poor surface condition on account of differential polishing of constituent phases, inherent surface relief due to phase transformations such as the martensitic transformation, highly deformed samples with significant grain fragmentation, dynamic experiments inside the SEM where acquisition times must be extremely low to capture the dynamics of the evolving microstructures, and beam-sensitive materials that are likely to be damaged during repetitive exposure to the electron beam while collecting multiple patterns from the same area [27]. Typically, most of such conditions result in poorer EBSPs and it is not practical to acquire corresponding high quality ground truth EBSPs. However, the noise introduced by the inherent sample features can be approximated/simulated by tweaking the EBSP acquisition conditions on a sample with high quality EBSPs, or by digitally adding suitable noise to acquired high quality EBSPs. A high degree of remnant plastic strain in the sample can cause the EBSPs to be blurred, which can be partially simulated through one or a combination of strategies such as employing a larger value of detector binning, a lower detector exposure time, a distorted electron beam, and/or applying gaussian filters and/or noise to the acquired high quality EBSPs (of reference samples with high quality EBSPs). High speed acquisition leads to an increase in the random noise in the EBSPs, which can be simulated or approximated by low exposure, higher binning, etc. Thus, in the current work, the unavailability of ground truth images for the noisy EBSPs was circumvented by collecting high quality EBSPs (to act as ground truth) and tweaking the acquisition conditions to introduce noise in the pattern that mimics the noise expected in realistic samples of interest (samples with poor EBSPs on account of fine grains, remnant plastic strain, etc.), as demonstrated in Fig. 3.

Only the data points that satisfied stringent quality metrics (for the ground truth EBSPs) were included in the training data set. These metrics included the confidence index, CI (>0.8), and fit (<0.5). Note that we have reliable values of CI and fit only for the reference scan (ground truth scan, P_R(i,j)) and the cross-reference scan, while the majority of the intermediate scans (with poor EBSPs) will not have corresponding points with any meaningful CI and fit values due to the poor EBSP quality. A data point from a reference EBSD scan at a given pixel (ground truth), together with its corresponding poor quality EBSP examples, is considered for inclusion in the training data set if the corresponding point from the cross-reference scan has a misorientation of less than 0.5° from the reference point and simultaneously satisfies the above listed quality metrics. For each such point, several input-ground truth pairs were prepared by drawing EBSPs from the intermediate scans with non-optimal EBSP acquisition conditions and applying some random distortions to the EBSPs. Some high quality EBSPs were included in the training data set without incorporating any distortions and were duplicated as ground truth images. This is to ensure that the model learns to modify only the patterns that require improvement but leaves the good EBSPs practically untouched. In addition, random gaussian blur, gaussian noise, salt-and-pepper noise, random flipping and rotation of the training examples were incorporated during the training to increase the diversity of the training data set and ensure improved generality of the model.

3. Results and discussion

For this study, several raw EBSPs of different resolutions (image binning ranging over 1×1, 2×2, 4×4, 6×6 and 8×8) were acquired using an EDAX Velocity detector mounted on an FEI Apreo 2 FE-SEM. EBSPs were acquired from Ni-Co-Fe high entropy alloy (FCC), tungsten (BCC) and Ti-6Al-4V (HCP+BCC) samples additively manufactured using the laser based directed energy deposition technique. The choice of additively manufactured samples was prompted by the fact that the inherently complex thermo-kinetics associated with the multiple steep thermal cycles due to multi-track and multilayer deposition tend to generate residual stresses in the grains, thereby causing minor in-grain misorientations and a corresponding variation in the EBSPs around a mean orientation. This helps to increase the diversification of the training data set and helps the model to learn the features of interest (Kikuchi bands) and ignore irrelevant features such as noise, shadows, uneven illumination, etc. A total of 10,000 training examples, and 500 each for the testing and validation data sets, were employed in the current work.

Fig. 4 shows the gradual improvement in the prediction or de-noising ability of the c-GAN model with the progress of training. As can be observed, depending on the quality of the input EBSP, the model could predict the corresponding correct noise-free EBSP within 50 epochs of training in the majority of cases.

In a few cases, when the quality of the input EBSP was too poor, the model failed to predict any usable EBSP, indicating the limit of the quality which can be handled by the current model. However, this limit is primarily determined by the number of such poor examples in the training data set, and the performance of the model can be improved by adding more of such EBSPs to the training data set. Even so, the performance gains achieved at this level of training are considered better than those of many existing methods.

Different loss functions such as L1, L2, patch GAN etc. were employed. While the L1 and L2 losses showed continued improvement in performance (i.e., monotonic decreases in loss) as a function of training steps, the GAN loss did not show consistent improvement with the progress of training. Fig. 5 shows the evolution of the training loss and the corresponding model performance for the case of the model with 100% weightage to the L2 loss. Among the different loss functions employed, it was observed that the results from the L2 loss function appear to be the best suited to the current requirements of the task. This point is made clear by Fig. 6, where the results from the model using various loss functions in translating a noisy (or poor) EBSP are shown. It is clear that the L2 loss function outperformed the other types of losses. While the inferred images from the models based on L1 and L2 losses were similar in features, the contrast of the generated EBSPs was found to be higher in the case of the L2 loss function. Moreover, previous works indicate that the L2 loss is better at preserving linear features [23] and rendering sharper images, which is a desirable quality in the current task. Hence, for the actual testing in the rest of the manuscript, the L2 loss function was employed.

4. Application of ML processing to a full scale EBSD scan

The performance of the current ML processing method was tested on full scale EBSD map data. This test was also used to verify whether the current ML processing introduced any artefacts leading to undesirable deviations of the final orientation solution. For this, an entirely different EBSD scan, whose data points had never been exposed to the ML model during any phase of training, was used. The EBSP data of the entire EBSD map was fed to the ML model and the inferred EBSPs were saved back in a format that TSL-OIM® (EBSD analysis software) can handle, i.e., the HDF (Hierarchical Data Format) format, for further analysis. Again, the map data were collected with two settings (optimal for the ground truth and non-optimal for testing the ML model). The result of applying the ML processing to the entire set of EBSPs is presented in Fig. 7. As can be seen, the non-optimal scan with its poor EBSPs had a low hit rate of 6% (number of points with confidence index greater than 0.1) and virtually illegible IQ (image quality, a measure of the quality of the EBSP) maps.
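A minimal sketch of this full-map workflow is given below, under the assumption of an HDF5 container holding one pattern per scan point; the dataset path inside the file ('patterns') and the array layout are illustrative, not the layout used by TSL-OIM, and any intensity scaling applied during training must be reproduced before inference.

import h5py
import numpy as np

def denoise_scan(generator, input_h5, output_h5, batch_size=64):
    # Read every raw EBSP of an EBSD map, run it through the trained generator,
    # and write the inferred (de-noised) patterns to a new HDF5 file that the
    # EBSD analysis software can then re-index.
    with h5py.File(input_h5, 'r') as src, h5py.File(output_h5, 'w') as dst:
        patterns = src['patterns']                     # assumed dataset name/layout (N, H, W)
        out = dst.create_dataset('patterns', shape=patterns.shape, dtype='float32')
        for start in range(0, patterns.shape[0], batch_size):
            batch = patterns[start:start + batch_size].astype(np.float32)
            batch = batch[..., np.newaxis]             # add a channel axis for the model
            refined = generator(batch, training=False).numpy()[..., 0]
            out[start:start + batch_size] = refined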


Fig. 4. Depiction of gradual improvement in the inference/prediction capability of the model during the learning on test data. (a), (b) are successful cases where de-noising has been very effective. (c) an example with severe noise where the model failed to denoise satisfactorily.

Fig. 5. Training curve (training loss during the training (as a function of number of training steps)) along with the model performance improvement as measured by
metric SSIM. The snapshots of the predictions from the model at various stages of training curve are superimposed as insets. All these insets (i.e., intermediate
predictions) correspond to an arbitrarily chosen test EBSP (shown in the right side of the figure). As explained, this EBSP is part of the test data that is never exposed
to model during the training phase.

The ground truth maps, on the other hand, did reveal grains with a high degree of confidence index (mean CI = 0.89). The ML processed patterns were re-imported into the TSL-OIM software and re-indexed using the same settings (Fig. 7(c)) as were used in the case of Fig. 7(a) and (b). The marked improvement in the hit rate (80%) relative to Fig. 7(a) can be seen. While most of the points were indexed correctly, within 2° of the ground truth orientations, a few of the points, especially at sub-grain boundaries, were found to be wrongly indexed. This can be attributed to the overlap of adjoining patterns at the sub-boundaries and to the fact that such points are far fewer in number in the training data set. These factors make the inference of such EBSPs far more difficult for the model. However, this limitation can be addressed by increasing the number of such cases in the training data set and will be taken up in upcoming work. Despite this limitation, the extent of improvement brought out by the ML processing of the EBSPs is quite evident and offers a robust alternative to other noise reduction methods [4,27,28]. Another noteworthy improvement brought out by the ML processing can be observed in the IQ maps.
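For reference, the hit-rate metric quoted here is straightforward to reproduce from the per-point confidence index exported by the indexing software; a small sketch (array layout assumed) follows.

import numpy as np

def hit_rate(confidence_index, threshold=0.1):
    # Fraction of scan points whose confidence index exceeds the threshold,
    # expressed as a percentage (the 6% vs. 80% figures quoted above).
    ci = np.asarray(confidence_index, dtype=float)
    return 100.0 * np.count_nonzero(ci > threshold) / ci.size

# Example with made-up values: four of the five points exceed CI = 0.1.
print(hit_rate([0.05, 0.3, 0.9, 0.45, 0.12]))   # -> 80.0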


Fig. 6. Performance of the ML model under different loss functions. It is evident that the L2 loss function generated images closest to the ground truth images.

Fig. 7. Illustration of the indexing improvement brought out by the ML processing. (a) As-acquired poor (non-optimal EBSP) EBSD data; (b) ground truth EBSD data (with optimal acquisition parameters); (c) re-indexed maps from the ML processed EBSPs of (a). While the indexing success rate (points with confidence index higher than 0.1) improved to 80% in (c) from 6% (of the raw data, (a)), the similarity of orientations between the ground truth data (b) and (c) is evident. Corresponding IQ maps are presented in the bottom row for each case.

As can be observed, the quality improvement in the IQ maps can be seen to surpass the corresponding ground truth data maps (Fig. 7b and c). The contrast between the intra-grain and grain boundary regions in the ML processed data set is much higher. This is also a consequence of the excellent improvement in EBSP features and contrast for intra-grain pixels and the very poor inference in the case of grain boundary pixels by the current ML processing. Thus, the inability of the model to handle grain boundary pixels, i.e., EBSPs from the grain boundary regions, actually improved the IQ contrast of the processed data to a higher value than in the corresponding ground truth scans. Since the aim of the majority of EBSD analysis work is to capture the intra-granular orientation features and the trends in them, the improvement brought about by the current approach, notwithstanding its limitations in handling grain boundary pixels, should help achieve better analysis results in otherwise difficult to index samples. This limitation will have a higher impact on the estimation of quantities such as geometrically necessary dislocation densities and kernel average misorientations, as they require accurate orientation solutions at the interfaces too. Nonetheless, it must be recognized that in the case of samples or scenarios where regular indexing itself is not possible, being able to accurately derive orientations and grain sizes in a major proportion of the maps (excluding sub-grain boundaries, where pattern overlap limits the model's ability to de-noise the patterns) is a big improvement in the microstructural analysis and is often likely to satisfy the major requirements in many cases.

It may be acknowledged here that the application of ML methods for solving EBSPs has been proposed recently by Ding et al. [6]. However, their approach was to handle the entire pipeline of processing and indexing through the deep learning ML model. It may be noted that in such an approach the model needs to be trained for every phase separately, and it is very opaque to the user, rendering inspection or detection of the artefacts/deviations introduced by the ML rather difficult. The current approach, on the other hand, can handle any crystallographic phase and a wider range of, and far more severe, noise, and it outputs EBSPs which can be inspected for any deviations/artefacts; it also takes advantage of the existing robust pattern indexing algorithms for the actual solution of the EBSPs once the required quality of the EBSP is achieved. Additionally, the current approach should work seamlessly in the case of multiphase materials, as the method is agnostic to the underlying phases concerned. It is also expected to work better in handling materials with pseudosymmetry challenges.


The current work, to the best of the knowledge of the authors, is the first application of powerful generative deep learning models for the characterization or measurement of crystallographic orientation in materials. However, in order to harness the full power of GAN networks for de-noising EBSPs and rendering difficult to characterize samples accessible for better characterization, a few more challenges have to be addressed. These include, but are not limited to, training data covering far more severe and realistic noise scenarios (such as EBSPs with shadows, for example) and a wider range of EBSP acquisition settings (different cameras, microscopes, etc.). With the remarkable pace of research efforts in the field of ML in general and deep learning in particular, it can be anticipated that several more powerful, robust and general-purpose ML models will be developed in the near future and can be deployed with minimal effort using the same EBSP training data that was employed in the current training phase of the present model.
5. Summary

• A deep learning model based on a conditional generative adversarial network has been employed, for the first time, to de-noise poor quality EBSPs and enable improved EBSD indexing of difficult to characterize samples.
• Training data were prepared from experimental sets of noisy and corresponding high fidelity EBSPs, of the order of several thousand examples, covering a wide variety and large extent of realistic noise, and successful EBSD indexing was achieved through the ML based EBSP refinement.
• A significant improvement in the pattern indexing rate (hit rate, defined as the percentage of points with confidence index greater than 0.1), coupled with improved pattern fit, was achieved using this technique.
• The ML processing was more successful in de-noising distorted EBSPs acquired under non-optimal acquisition settings from grain interiors compared to grain boundaries and sub-boundaries. This was owing to the larger population of training examples from the grain interiors as compared to the boundaries.
Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability

Data will be made available on request.
Acknowledgments

The authors acknowledge the infrastructure and support of the Center for Agile and Adaptive Additive Manufacturing (CAAAM), funded through State of Texas Appropriation 190405-105-805008-220, and the Materials Research Facility (MRF) at the University of North Texas for access to microscopy and phase analysis facilities.
References

[1] V. Randle, Electron backscatter diffraction: strategies for reliable data acquisition and processing, Mater. Charact. 60 (2009), https://doi.org/10.1016/j.matchar.2009.05.011.
[2] T.B. Britton, V.S. Tong, J. Hickey, A. Foden, A.J. Wilkinson, AstroEBSD: exploring new space in pattern indexing with methods launched from an astronomical approach, J. Appl. Crystallogr. 51 (2018), https://doi.org/10.1107/S1600576718010373.
[3] S. Singh, M. de Graef, Automated dictionary-based indexing of electron channeling patterns, Microsc. Microanal. 21 (2015), https://doi.org/10.1017/S1431927615010983.
[4] S. Singh, Y. Guo, B. Winiarski, T.L. Burnett, P.J. Withers, M. de Graef, High resolution low kV EBSD of heavily deformed and nanocrystalline Aluminium by dictionary-based indexing, Sci. Rep. 8 (2018), https://doi.org/10.1038/s41598-018-29315-8.
[5] F. Ram, S. Wright, S. Singh, M. de Graef, Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing, Ultramicroscopy 181 (2017), https://doi.org/10.1016/j.ultramic.2017.04.016.
[6] Z. Ding, C. Zhu, M. de Graef, Determining crystallographic orientation via hybrid convolutional neural network, Mater. Charact. 178 (2021), https://doi.org/10.1016/j.matchar.2021.111213.
[7] Z. Ding, E. Pascal, M. de Graef, Indexing of electron back-scatter diffraction patterns using a convolutional neural network, Acta Mater. 199 (2020), https://doi.org/10.1016/j.actamat.2020.08.046.
[8] S.I. Wright, B.L. Adams, Automatic analysis of electron backscatter diffraction patterns, Metall. Trans. A 23 (1992), https://doi.org/10.1007/BF02675553.
[9] F.Y. Zhou, L.P. Jin, J. Dong, Review of convolutional neural network, Jisuanji Xuebao/Chin. J. Comput. 40 (2017) 1229–1251, https://doi.org/10.11897/SP.J.1016.2017.01229.
[10] H.-J. Yoo, Deep convolution neural networks in computer vision: a review, IEIE Trans. Smart Process. Comput. 4 (2015) 35–43, https://doi.org/10.5573/ieiespc.2015.4.1.035.
[11] S. Pouyanfar, S. Sadiq, Y. Yan, H. Tian, Y. Tao, M.P. Reyes, M.L. Shyu, S.C. Chen, S.S. Iyengar, A survey on deep learning: algorithms, techniques, and applications, ACM Comput. Surv. 51 (2018), https://doi.org/10.1145/3234150.
[12] L. Jiao, J. Zhao, A survey on the new generation of deep learning in image processing, IEEE Access 7 (2019), https://doi.org/10.1109/ACCESS.2019.2956508.
[13] L.C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A.L. Yuille, DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs, IEEE Trans. Pattern Anal. Mach. Intell. 40 (2018) 834–848, https://doi.org/10.1109/TPAMI.2017.2699184.
[14] B. Neupane, T. Horanont, J. Aryal, Deep learning-based semantic segmentation of urban features in satellite images: a review and meta-analysis, Remote Sens. (Basel) 13 (2021), https://doi.org/10.3390/rs13040808.
[15] J. Li, F. Jiang, J. Yang, B. Kong, M. Gogate, K. Dashtipour, A. Hussain, Lane-DeepLab: lane semantic segmentation in automatic driving scenarios for high-definition maps, Neurocomputing 465 (2021), https://doi.org/10.1016/j.neucom.2021.08.105.
[16] J. Pawłowski, S. Majchrowska, T. Golan, Generation of microbial colonies dataset with deep learning style transfer, Sci. Rep. 12 (2022), https://doi.org/10.1038/s41598-022-09264-z.
[17] Z. Wang, J. Chen, S.C.H. Hoi, Deep learning for image super-resolution: a survey, IEEE Trans. Pattern Anal. Mach. Intell. 43 (2021), https://doi.org/10.1109/TPAMI.2020.2982166.
[18] J. Jiang, C. Wang, X. Liu, J. Ma, Deep learning-based face super-resolution: a survey, ACM Comput. Surv. 55 (2023) 1–36, https://doi.org/10.1145/3485132.
[19] C. Chen, Z. Xiong, X. Tian, Z.J. Zha, F. Wu, Real-world image denoising with deep boosting, IEEE Trans. Pattern Anal. Mach. Intell. 42 (2020) 3071–3087, https://doi.org/10.1109/TPAMI.2019.2921548.
[20] C. Senaras, K.K.N. Muhammad, B. Sahiner, M.P. Pennell, G. Tozbikian, G. Lozanski, M.N. Gurcan, Optimized generation of high-resolution phantom images using cGAN: application to quantification of Ki67 breast cancer images, PLoS One 13 (2018), https://doi.org/10.1371/journal.pone.0196846.
[21] Z. Wang, Q. She, T.E. Ward, Generative adversarial networks in computer vision: a survey and taxonomy, ACM Comput. Surv. 54 (2021), https://doi.org/10.1145/3439723.
[22] A. Aggarwal, M. Mittal, G. Battineni, Generative adversarial network: an overview of theory and applications, Int. J. Inf. Manage. Data Insights 1 (2021), https://doi.org/10.1016/j.jjimei.2020.100004.
[23] P. Isola, J.Y. Zhu, T. Zhou, A.A. Efros, Image-to-image translation with conditional adversarial networks, in: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Institute of Electrical and Electronics Engineers Inc., 2017, pp. 5967–5976, https://doi.org/10.1109/CVPR.2017.632.
[24] B. Pang, E. Nijkamp, Y.N. Wu, Deep learning with TensorFlow: a review, J. Educ. Behav. Stat. 45 (2020) 227–248, https://doi.org/10.3102/1076998619872761.
[25] Z. Wang, A.C. Bovik, H.R. Sheikh, E.P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Trans. Image Process. 13 (2004) 600–612, https://doi.org/10.1109/TIP.2003.819861.
[26] Y.H. Chen, S.U. Park, D. Wei, G. Newstadt, M.A. Jackson, J.P. Simmons, M. de Graef, A.O. Hero, A dictionary approach to electron backscatter diffraction indexing, Microsc. Microanal. 21 (2015) 739–752, https://doi.org/10.1017/S1431927615000756.
[27] S.I. Wright, M.M. Nowell, S.P. Lindeman, P.P. Camus, M. de Graef, M.A. Jackson, Introduction and comparison of new EBSD post-processing methodologies, Ultramicroscopy 159 (2015) 81–94, https://doi.org/10.1016/j.ultramic.2015.08.001.
[28] P.T. Brewick, S.I. Wright, D.J. Rowenhorst, NLPAR: non-local smoothing for enhanced EBSD pattern indexing, Ultramicroscopy 200 (2019), https://doi.org/10.1016/j.ultramic.2019.02.013.
