
In Situ Quality Monitoring in Direct Energy Deposition Process using Co-axial Process Zone Imaging and Deep Contrastive Learning

Home Page: https://www.sciencedirect.com/science/article/pii/S1526612522004996?via%3Dihub



Additive-Manufacturing-Contrastive-Learners

This repo hosts the code used in the journal article "In Situ Quality Monitoring in Direct Energy Deposition Process using Co-axial Process Zone Imaging and Deep Contrastive Learning".

Journal link

https://doi.org/10.1016/j.jmapro.2022.07.033

[Figure: DED process]

Overview

Contrastive learners map inputs to a compact Euclidean space in which distances correspond to similarity. In an article published in the Journal of Manufacturing Processes (SME, impact factor 5.6), we propose both supervised and semi-supervised strategies to monitor the quality of the built part across the process spaces that can be realized on Ti6Al4V grade 1 in a commercial L-DED machine from BeAM Machines. Optical emissions from the process zone, imaged co-axially, were distinguished using two deep-learning-based contrastive learning frameworks, and a methodology for tracking workpiece quality in real time is demonstrated. Given the complicated melt-pool morphology across the L-DED process space, the similarity scores from the contrastive learners could also serve downstream tasks such as process control, in addition to process monitoring.
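As a toy illustration of such an embedding space, the snippet below compares Euclidean distances between embedding vectors. All vectors and names here are made up for illustration; in practice a trained encoder would produce them from process-zone images.

```python
import math

# Hypothetical 3-D embeddings of process-zone images; a real encoder would
# output higher-dimensional vectors (these numbers are invented).
emb_stable  = [0.10, 0.80, 0.30]  # image from a stable process regime
emb_similar = [0.12, 0.78, 0.33]  # image from the same regime
emb_defect  = [0.90, 0.05, 0.60]  # image from a defective regime

def similarity_score(x, y):
    """Euclidean distance in the learned embedding space; smaller = more similar."""
    return math.dist(x, y)

print(similarity_score(emb_stable, emb_similar))  # small: same process condition
print(similarity_score(emb_stable, emb_defect))   # large: different condition
```

A quality monitor could then compare incoming embeddings against reference embeddings of known-good builds and flag frames whose distance exceeds a threshold.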

Contrastive Learners

Contrastive learning is an ML paradigm that enables neural networks to learn from similarities and dissimilarities between samples drawn from predefined dataset categories, without per-sample labelling. Instead of training a network on an image and its corresponding ground truth, pairs of images are passed into the network. The network's convolution layers generate lower-dimensional representations of the images, which are compared using a loss function: the network weights are updated to reduce the distance metric if the images are alike and to increase it if they are distinct. The trained contrastive model yields a refined lower-dimensional representation that can be further used for classification, segmentation, and verification.

Two losses are commonly used in the contrastive learning paradigm: contrastive loss and triplet loss; which one applies depends primarily on the way the network is trained. The idea behind contrastive loss is to assign a Boolean value to each pair of images: 1 if they belong to the same category (x, x^+), 0 if they are from different categories (x, x^-). During training, the lower-dimensional representations (f(x), f(x^+)) or (f(x), f(x^-)) are computed and mutually compared using the contrastive loss function, as shown in the equation below

L = Y · D_+² + (1 − Y) · max(0, m − D_-)²

where Y is the Boolean label, D_+ and D_- are the distance metrics, and m is a constant margin. The contrastive loss penalizes a large distance for pairs with Boolean value 1, and penalizes a distance smaller than the margin for pairs with Boolean value 0. In other words, the distance between representations is driven down when the images are similar and pushed beyond the margin when they are not. Training a CNN with contrastive loss involves two instances of the same model, sharing architecture and weights, as shown in the figure below. At each iteration, the model is fed pairs of images labelled 1 or 0; the loss is calculated by comparing the output layers, and the network weights are adjusted accordingly to reduce the loss.

[Fig 2: Two identical CNN instances trained with contrastive loss]
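The per-pair loss described above can be sketched as a plain-Python function (a minimal illustration of the margin form given in the equation; the distance values are made up):

```python
def contrastive_loss(y, d, m=1.0):
    """Contrastive loss for one image pair.
    y: Boolean label (1 = same category, 0 = different categories).
    d: distance between the two embeddings (D_+ or D_-).
    m: constant margin for dissimilar pairs."""
    return y * d ** 2 + (1 - y) * max(0.0, m - d) ** 2

print(contrastive_loss(1, 0.5))   # 0.25: similar pair still apart -> penalized
print(contrastive_loss(0, 0.25))  # 0.5625: dissimilar pair inside the margin -> penalized
print(contrastive_loss(0, 1.5))   # 0.0: dissimilar pair beyond the margin -> no penalty
```

In a training loop this value would be averaged over a batch and backpropagated through both network instances, which share weights.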

In the case of triplet loss, image triplets are passed into the CNN: anchor (x), positive (x^+), and negative (x^-). The anchor image serves as a reference, while the positive and negative images are taken from the same and a different category, respectively. The triplet loss minimizes the distance between the lower-dimensional representations of the anchor f(x) and the positive f(x^+) while maximizing the distance between the anchor f(x) and the negative f(x^-). The triplet loss is defined in the equation below,

L = max(0, D_+ − D_- + m)

where D_+ is the distance between the positive image and the anchor, D_- is the distance between the negative image and the anchor, and m is a constant margin separating the positive and negative regions. For good predictions, the distances D_+(f(x), f(x^+)) and D_-(f(x), f(x^-)) have to be low and high, respectively. CNN training with triplet loss involves three instances of the same model sharing the same architecture and weights, as shown in the figure below. Each model instance is fed the anchor, positive, or negative image; at each iteration, the triplet loss is calculated by comparing the lower-dimensional representations at the output layers, and the network weights are adjusted accordingly to reduce the loss.

[Fig 3: Three identical CNN instances trained with triplet loss]
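A per-triplet version of this loss can likewise be sketched in plain Python (a minimal illustration of the margin form above; distance values and the margin are invented):

```python
def triplet_loss(d_pos, d_neg, m=0.5):
    """Triplet loss for one (anchor, positive, negative) triplet.
    d_pos: distance D_+ between anchor and positive embeddings.
    d_neg: distance D_- between anchor and negative embeddings.
    m: constant margin separating positive and negative regions."""
    return max(0.0, d_pos - d_neg + m)

print(triplet_loss(0.25, 1.0))   # 0.0: negative already beyond the margin
print(triplet_loss(0.75, 0.25))  # 1.0: anchor closer to negative -> penalized
```

Zero loss means the negative is already at least m further from the anchor than the positive, so that triplet contributes no gradient.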

Code

git clone https://github.com/vigneashpandiyan/Additive-Manufacturing-Contrastive-Learners
cd Additive-Manufacturing-Contrastive-Learners
python Main_Siamese.py
python Main_Triplet.py

Citation

@article{pandiyan2022situ,
  title={In situ quality monitoring in direct energy deposition process using co-axial process zone imaging and deep contrastive learning},
  author={Pandiyan, Vigneashwara and Cui, Di and Le-Quang, Tri and Deshpande, Pushkar and Wasmer, Kilian and Shevchik, Sergey},
  journal={Journal of Manufacturing Processes},
  volume={81},
  pages={1064--1075},
  year={2022},
  publisher={Elsevier}
}
