PHYSICS INFORMED NEURAL NETWORKS (PINNs)
ABSTRACT
A physics-informed neural network (PINN) is developed to solve the conductive heat transfer partial differential equation (PDE), along with convective heat transfer PDEs as boundary conditions (BCs), in manufacturing and engineering applications. Since convective coefficients are typically unknown, current analysis approaches based on trial-and-error finite element (FE) simulations are slow. The loss function is defined based on the errors in satisfying the PDE, BCs, and initial condition. An adaptive normalizing scheme is developed to reduce all loss terms simultaneously. In addition, the theory of heat transfer is used for feature engineering.
The rise of Machine Learning (ML) and AI in recent years offers an opportunity to develop fast surrogate ML models to replace traditional FE tools in manufacturing and general engineering applications. Several approaches have been explored in the literature.
Understanding an Artificial Neural Network (ANN)
An ANN has hundreds or thousands of artificial neurons, called processing units, which are interconnected by weighted links. These processing units are made up of input and output units. The input units receive information in various forms and structures; based on an internal weighting system, the neural network attempts to learn from the information presented in order to produce an output. Just as humans need rules and guidelines to arrive at a result or output, ANNs use a set of learning rules called backpropagation, an abbreviation for backward propagation of error, to refine their output results.
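The forward computation described above can be sketched in a few lines. This is a minimal illustration with hypothetical sizes (2 inputs, 4 hidden units, 1 output) and random, untrained weights; a real network would learn these weights via backpropagation.

```python
import numpy as np

# Minimal feed-forward ANN sketch (hypothetical sizes: 2 inputs,
# 4 hidden units, 1 output). Weights here are random, not trained.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)  # hidden layer with tanh activation
    return h @ W2 + b2        # linear output unit

y = forward(np.array([[0.5, -0.2]]))
print(y.shape)  # one scalar output per input sample
```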
Loss function
A loss function is used to optimize the parameter values in a neural network model. Loss functions map a set of parameter values for the network onto a scalar value that indicates how well those parameters accomplish the task the network is intended to perform.
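As a concrete example of such a mapping, mean squared error reduces a whole set of predictions and targets to one scalar:

```python
import numpy as np

# Sketch of a scalar loss: mean squared error maps predictions and
# targets to a single value measuring how well the parameters perform.
def mse(pred, target):
    return np.mean((pred - target) ** 2)

value = mse(np.array([1.0, 2.0]), np.array([1.0, 0.0]))
print(value)  # mean of (0^2, 2^2) = 2.0
```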
Back-propagation
Back-propagation is the essence of neural net training. It is the method of fine-tuning the weights of a neural net based on the error rate obtained in the previous epoch (i.e., iteration). Proper tuning of the weights allows you to reduce error rates and to make the model reliable by increasing its generalization.
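The fine-tuning loop described above can be illustrated on a one-parameter toy model, where the gradient of the loss with respect to the weight is computed by hand and used to update the weight each epoch:

```python
import numpy as np

# Toy backpropagation sketch: fit y = w * x to data by repeatedly
# computing the loss gradient w.r.t. w and stepping downhill.
x, y_true = np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])  # target w = 2
w, lr = 0.0, 0.05
for epoch in range(200):
    y_pred = w * x
    grad = np.mean(2 * (y_pred - y_true) * x)  # d(MSE)/dw, the error signal
    w -= lr * grad                             # weight update
print(round(w, 3))  # converges near 2.0
```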
Physics informed neural networks (PINNs)
Physics informed neural networks (PINNs) are deep learning based techniques for solving partial differential equations (PDEs) encountered in computational science and engineering. Guided by data and physical laws, PINNs find a neural network that approximates the solution to a system of PDEs. Such a neural network is obtained by minimizing a loss function in which any prior knowledge of the PDEs and data is encoded.
We introduce physics informed neural networks — neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations.
A PINN is developed in this study to solve the heat transfer PDE with convective BCs in a representative manufacturing setting. Physics-informed features are engineered based on the theory of heat transfer to accurately represent the underlying physics using the trained PINN. It is shown that, using this approach, a PINN can predict heat transfer even outside its training zone. The developed PINN allows for fast evaluation of heat transfer problems with various convective BCs. In industry, this enables the development of near real-time feedback control loops to adjust the process parameters and control the temperature history of the part.
2. Method
The general heat transfer equation for a given part can be written as:

ρ Cp ∂T/∂t = ∇·(k ∇T) + Q̇
In which T is the temperature, ρ is the part density, Cp is the part specific heat capacity, k is the part conductivity, and Q̇ is the rate of heat generation in the part. The general heat equation can be simplified to represent the case of one-dimensional heat transfer with no heat generation term:

∂T/∂t = α ∂²T/∂x², where α = k/(ρ Cp) is the thermal diffusivity.
The convective boundary condition is written as:

−k ∂T/∂x |boundary = h (T_boundary − T∞)
Where h is the convective Heat Transfer Coefficient (i.e., the BC), T∞ is the air temperature around the part, and T_boundary is the part temperature at its surface. Suppose that the prediction made by a neural network, f(x, t, h1, h2), is intended to be a solution to the one-dimensional heat equation for any given boundary condition. The solution's adherence to the heat transfer PDE at any given point can be quantified as:

error_pde = ∂f/∂t − α ∂²f/∂x²
If the prediction made by the neural network is a perfect solution to the boundary conditions, these error terms will be zero at any given time and for any given heat transfer coefficient. Heat transfer problems are frequently solved with an Initial Condition (IC), such as the one-dimensional solid being in thermal equilibrium at the beginning of the problem. One productive way to think of an IC is to recognize that it simply describes a boundary condition applied at the time-dimension boundary t = 0; the corresponding error term is denoted error.bc0, as mentioned above.
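The idea that a perfect solution drives the PDE error to zero can be checked numerically. The sketch below uses the known exact solution T(x, t) = exp(−α π² t) sin(π x) of ∂T/∂t = α ∂²T/∂x² (a standard textbook case, not the paper's trained network) and evaluates the residual with finite differences:

```python
import numpy as np

# For an exact solution of T_t = alpha * T_xx, the PDE residual
# T_t - alpha * T_xx (here via central finite differences) is ~0.
alpha, dx, dt = 0.1, 1e-4, 1e-4
T = lambda x, t: np.exp(-alpha * np.pi**2 * t) * np.sin(np.pi * x)

def residual(x, t):
    T_t = (T(x, t + dt) - T(x, t - dt)) / (2 * dt)              # central diff in t
    T_xx = (T(x + dx, t) - 2 * T(x, t) + T(x - dx, t)) / dx**2  # central diff in x
    return T_t - alpha * T_xx

print(abs(residual(0.3, 0.5)))  # near machine precision for an exact solution
```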
For a collection of training data points, the loss term for training the neural network can be defined as:

Loss = λ_pde · mean(error_pde²) + λ_bc · mean(error_bc²) + λ_ic · mean(error_ic²)
Where the λ values are scaling factors that normalize the loss terms. Each term of the loss function is designed to calculate the mean square of the corresponding error term over the points at which that error term was evaluated. For a neural network that perfectly represents the solution to the one-dimensional heat equation, all of the loss values, evaluated over any arbitrary set of points, will sum to zero. If the magnitude of one loss term is significantly greater than the magnitude of the others, or if the sensitivity of one loss term to a change in the weights is significantly greater than that of the other loss terms, the neural network will train to a solution that minimizes one loss term but is not appreciably influenced by the others.
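One plausible form of the adaptive normalizing scheme (an illustrative choice, not necessarily the authors' exact scheme) is to rescale each loss term by the inverse of its current magnitude, so that no single term dominates the gradient:

```python
import numpy as np

# Adaptive normalization sketch: scale each loss term by the inverse
# of its magnitude so all terms contribute comparably to training.
loss_terms = {"pde": 4.0e2, "bc": 3.0e-1, "ic": 7.0e-4}  # hypothetical raw values

lambdas = {name: 1.0 / max(value, 1e-12) for name, value in loss_terms.items()}
total = sum(lambdas[name] * loss_terms[name] for name in loss_terms)

print(total)  # each normalized term contributes ~1, so total is ~3
```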
Implementation and Training of a Physics-Informed Neural Network
Training of a PINN to solve the heat transfer PDE with convective BCs was implemented in Python (V3.6.8), using the TensorFlow and Keras libraries (V2.10). The training of the PINN was based on selecting random (x, t, h) batches in each epoch and minimizing the loss function (Equation 8) using built-in Keras optimizers to obtain weights and biases of the neural network that satisfy the heat transfer PDE. Based on a grid search, the Adam optimizer with a learning rate of 0.0001 was used with a batch size of 150 for each loss term. In most cases, 100K epochs were used for training.
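The per-epoch sampling of random (x, t, h) batches can be sketched as follows; the domain bounds and h range are illustrative assumptions, not values from the study:

```python
import numpy as np

# Sketch of selecting a random (x, t, h) collocation batch per epoch.
# Bounds below are illustrative assumptions.
rng = np.random.default_rng(42)
batch_size = 150
x = rng.uniform(0.0, 1.0, batch_size)    # position along the 1D part
t = rng.uniform(0.0, 1.0, batch_size)    # time
h = rng.uniform(5.0, 100.0, batch_size)  # convective coefficient (BC)
batch = np.stack([x, t, h], axis=1)
print(batch.shape)  # one row per collocation point
```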
In the absence of heat generation and convection, the heat equation can be solved analytically by separation of variables:

T(x, t) = Σ_n A_n sin(nπx/L) exp(−α (nπ/L)² t)

Where the A_n are weights determined by the initial condition and n is a positive integer. In the presence of heat generation or convection, an analytic solution would take a different form if the problem were analytically solvable.
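The series can be evaluated directly; the sketch below assumes, for illustration, an initial condition T(x, 0) = sin(πx/L), so that A_1 = 1 and all other coefficients vanish:

```python
import numpy as np

# Separation-of-variables series for the 1D heat equation, with an
# assumed initial condition giving A_1 = 1 and all other A_n = 0.
alpha, L = 0.1, 1.0  # illustrative diffusivity and length

def T_series(x, t, A=(1.0,)):
    return sum(An * np.sin((n * np.pi / L) * x)
               * np.exp(-alpha * (n * np.pi / L) ** 2 * t)
               for n, An in enumerate(A, start=1))

print(T_series(0.0, 0.2), T_series(L, 0.2))  # both ends held at zero
```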
CONCLUSION
This brief review concludes with the successful implementation of ANNs in difficult and complex heat transfer problems in the fields of energy systems, heat exchangers, and gas-solid fluidized beds, along with the authors' own study of ANN implementation in gas-solid fluidized bed heat transfer. The basic structure and methodology of ANN implementation are discussed in general. ANN modeling is explained for basic heat transfer areas, covering steady-state and dynamic thermal modeling in general heat transfer applications. ANN results are shown in terms of accuracy and flexibility of use, along with their computational and experimental validations. Thermal engineering analysis requires tedious equations and correlations to be developed to satisfy the fundamental principles of the physical system, which can instead be analyzed in a simple manner by implementing the ANN approach. The study shows that analysis with sparse and noisy input data, and even nonlinear relationship behavior, can be properly fitted with ANN modeling. It is one of the easier approaches to implement for multiple-response computations and complex thermal systems. Based on the results achieved by researchers in their analyses, it can be concluded that the BP algorithm with a feed-forward structure is a powerful learning algorithm in many heat transfer applications. These models provide better predictions with reduced standard and mean deviations. A regression value of R = 1 was obtained in training the network in many cases, and in other cases this value ranged from 0.899 to 0.999, strongly supporting that the network predictions are in good agreement with the experimentally observed values. Once an ANN model is trained for a particular thermal process, a reliable and quick response is possible, and these models can even be updated continuously for changes in the system.