**Number of sessions:** 6 sessions (two hours per session)

Artificial neural networks (ANNs) have become increasingly popular in recent years thanks to their ability to tackle complex problems. However, traditional ANNs depend on collected data, which is often unavailable, and many engineering problems are formulated as partial differential equations (PDEs) that a plain neural network knows nothing about. Physics-informed neural networks (PINNs) were developed to overcome these challenges. By combining the expressive power of ANNs with the governing physical equations, PINNs aim to deliver accurate, low-cost solutions without training data, while respecting the physics of the problem. Instead of solving a PDE directly, a PINN trains a neural network to learn a function that satisfies the equation and its boundary conditions. Training minimizes a loss function that measures how far the network output is from satisfying the equations. PINNs are now applied across a wide range of problems, including structural analysis, fluid flow simulation, porous media analysis, and more.
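The loss minimization described above is commonly written as a sum of residual, boundary, and initial terms. A generic form (the symbols are illustrative, not tied to a specific equation in the course) is:

```latex
\mathcal{L}(\theta) =
\underbrace{\frac{1}{N_r}\sum_{i=1}^{N_r}\bigl|\mathcal{N}[u_\theta](x_r^i,t_r^i)\bigr|^2}_{\text{PDE residual}}
+ \underbrace{\frac{1}{N_b}\sum_{j=1}^{N_b}\bigl|u_\theta(x_b^j,t_b^j)-g(x_b^j,t_b^j)\bigr|^2}_{\text{boundary conditions}}
+ \underbrace{\frac{1}{N_0}\sum_{k=1}^{N_0}\bigl|u_\theta(x_0^k,0)-u_0(x_0^k)\bigr|^2}_{\text{initial conditions}}
```

Here $u_\theta$ is the network with parameters $\theta$, $\mathcal{N}[\cdot]$ the PDE operator, and $g$, $u_0$ the boundary and initial data.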

**Reasons to learn PINNs**

**- Ability to solve complex problems:** PINNs can be used to solve problems that are difficult or impossible to solve analytically or numerically.

**- No need for data for training:** PINNs do not necessarily require data for training, which makes them suitable for problems where data collection is difficult or expensive. However, if data is available, it can be used to increase the accuracy of the network.

**- Interpretable results:** Unlike traditional artificial intelligence methods, the results obtained from PINNs are inherently interpretable, which helps to better understand the physical phenomena being studied.

**- Increased research capability:** PINN neural networks are an emerging research area with high potential for scientific discovery. By learning PINNs in this course, you will gain the skills to write strong scientific papers and proposals at the forefront of knowledge and join the pioneers in this field.

**Who is this course for?**

- University students, professors, and researchers

- People who work with differential equations in any way.

- Machine learning and artificial intelligence enthusiasts

1.2. Heat transfer differential equation and its coefficients

1.2.1. Introduction of the differential equation

1.2.2. Boundary and initial conditions

1.2.3. Different types of numerical differentiation

1.2.4. Discretization of the differential equation in space and time

1.2.5. Solving the differential equation using the finite difference method in Python

1.2.6. Review of results
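The finite-difference solve covered in 1.2.3–1.2.5 can be sketched as an explicit FTCS scheme for the 1D heat equation u_t = α u_xx. The diffusivity, grid sizes, and conditions below are illustrative placeholders, not the course's exact values.

```python
import numpy as np

# Illustrative parameters (not the course's exact values)
alpha = 0.01               # thermal diffusivity
nx, nt = 51, 500           # spatial points, time steps
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha   # respects the stability limit dt <= dx^2 / (2*alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)      # initial condition u(x, 0) = sin(pi x)

for _ in range(nt):
    # FTCS update on interior points; endpoints keep the Dirichlet BCs u = 0
    u[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
```

For this initial condition the exact solution is exp(-π²αt)·sin(πx), which makes the result easy to verify, as done in 1.2.6.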

1.3. Application of neural networks to solve differential equations

1.4. Converting a differential equation into an optimization problem

1.5. Cost function

1.5.1. Cost due to the neural network's error in estimating the results

1.5.2. Cost due to boundary conditions

1.5.3. Cost due to initial conditions

1.5.4. Cost due to any available measured data

1.6. How to train a neural network using cost function minimization

2.1.1. Why use Anaconda?

2.1.2. Creating an environment

2.1.3. Installing the required packages

2.1.4. Launching Spyder within the environment

2.2. Burgers differential equation

2.2.1. Introduction of the Burgers differential equation and its applications

2.2.2. Boundary and initial conditions

2.2.3. General solution of the differential equation

2.3. Steps for solving the differential equation using PINN

2.4. Artificial neural network architecture in Pytorch

2.5. Defining variables and discretizing them

2.6. Defining boundary and initial conditions and applying them to grid points

2.7. Defining the optimization algorithm

2.7.1. How to use more than one optimizer

2.7.2. Adam optimization algorithm and its implementation in PyTorch

2.7.3. L-BFGS optimization algorithm and its implementation in PyTorch
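The two-stage optimization in 2.7 (Adam first, then L-BFGS) can be sketched on a toy loss. The network architecture, iteration counts, and loss here are placeholders standing in for a real PINN residual.

```python
import torch

# Placeholder model; a real PINN would map (x, t) -> u
model = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1)
)
xt = torch.rand(100, 2)

def loss_fn():
    # Toy loss: drive the network output toward zero (stands in for the PDE residual)
    return (model(xt) ** 2).mean()

# Stage 1: Adam for a robust start
adam = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    adam.zero_grad()
    loss = loss_fn()
    loss.backward()
    adam.step()

# Stage 2: L-BFGS for fine convergence; it requires a closure that re-evaluates the loss
lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=100)

def closure():
    lbfgs.zero_grad()
    loss = loss_fn()
    loss.backward()
    return loss

lbfgs.step(closure)
```

Adam handles the noisy early phase well, while L-BFGS exploits curvature information for fast final convergence; this combination is common in PINN training.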

3.1. Definition of the cost function

3.1.1. How to Calculate the First and Second Derivatives of a Neural Network

3.1.2. Taking Derivatives of a Neural Network in PyTorch

3.1.3. Calculating the Cost of Neural Network Error

3.1.4. Calculating the Cost of Boundary Conditions

3.1.5. Calculating the Cost of Initial Conditions

3.1.6. Calculating the Total Cost
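The derivative computations in 3.1.1–3.1.2 rely on automatic differentiation rather than finite differences. A minimal sketch with `torch.autograd.grad` (the network here is a stand-in, not the course's architecture):

```python
import torch

# Stand-in network mapping x -> u(x)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)

x = torch.linspace(0.0, 1.0, 50).reshape(-1, 1)
x.requires_grad_(True)   # required so derivatives w.r.t. x can be taken
u = net(x)

# First derivative du/dx; create_graph=True keeps the graph for higher derivatives
u_x = torch.autograd.grad(
    u, x, grad_outputs=torch.ones_like(u), create_graph=True
)[0]

# Second derivative d2u/dx2, obtained by differentiating u_x again
u_xx = torch.autograd.grad(
    u_x, x, grad_outputs=torch.ones_like(u_x), create_graph=True
)[0]
```

These exact derivatives are what get substituted into the PDE to form the residual cost in 3.1.3.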

3.2. Training the Neural Network

3.2.1. Defining the Functions and Parameters for Training

3.2.2. Training the Neural Network with the Adam Optimization Algorithm

3.2.3. Fine-tuning the Neural Network with the L-BFGS Optimization Algorithm

3.3. Analyzing PINN Outputs

3.3.1. Generating High-Resolution Grid Points Across the Domain

3.3.2. Extracting the PINN Solution at Grid Points

3.3.3. Plotting Contour Plots of the Results

3.3.4. Plotting Plots at Different Time Steps
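The post-processing in 3.3 can be sketched as follows; a simple analytic field stands in for the trained network's output, and the domain bounds are illustrative.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# High-resolution evaluation grid over x in [-1, 1], t in [0, 1] (bounds illustrative)
x = np.linspace(-1.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 100)
X, T = np.meshgrid(x, t)

# Stand-in for the trained PINN: a real script would evaluate the network on the grid
U = np.sin(np.pi * X) * np.exp(-T)

fig, ax = plt.subplots()
contour = ax.contourf(T, X, U, levels=50)
fig.colorbar(contour, ax=ax, label="u(x, t)")
ax.set_xlabel("t")
ax.set_ylabel("x")
fig.savefig("pinn_contour.png")
```

Plots at individual time steps (3.3.4) are then just slices of `U` along the time axis.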

4.1. Defining the Differential Equation

4.1.1. Introducing a 2D Differential Equation

4.1.2. Introducing Spatial and Temporal Terms in the Differential Equation

4.1.3. Examining Boundary and Initial Conditions

4.2. Solution Steps for the Differential Equation

4.3. Defining the Artificial Neural Network

4.3.1. Defining the Neural Network Architecture

4.3.2. Implementing the Neural Network using the nn.Module Class

4.4. Generating Computational Points

4.4.1. Generating Computational Points Across the Computational Domain

4.4.2. Generating Computational Points on Model Boundaries

4.4.3. Generating Random Points Along the Time Axis

4.4.4. Assigning Values to All Computational Points in the Solution Space and on Model Boundaries
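The point generation in 4.4 can be sketched with NumPy: random collocation points inside a 2D spatial domain over time, plus points on a boundary edge. The domain bounds and point counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_interior, n_boundary = 1000, 200
x_min, x_max, y_min, y_max, t_max = 0.0, 1.0, 0.0, 1.0, 1.0  # illustrative bounds

# Interior collocation points: random (x, y, t) triples inside the domain
interior = rng.uniform(
    low=[x_min, y_min, 0.0], high=[x_max, y_max, t_max], size=(n_interior, 3)
)

# Boundary points: e.g. the x = x_min edge, random in y and t
left_edge = np.column_stack([
    np.full(n_boundary, x_min),
    rng.uniform(y_min, y_max, n_boundary),
    rng.uniform(0.0, t_max, n_boundary),
])
```

The same recipe is repeated for the other edges and for the t = 0 slice carrying the initial condition.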

5.1. Defining the Cost Function

5.1.1. Calculating the Neural Network Output at Computational Points

5.1.2. Calculating Neural Network Derivatives at Computational Points

5.1.3. Calculating the Cost of Neural Network Error in Approximating the PDE Solution

5.1.4. Calculating the Cost of Boundary and Initial Condition Approximation

5.1.5. Calculating the Total Cost

5.2. Training the Model

5.2.1. Defining the Optimization Algorithm and Setting Parameters

5.2.2. Determining the Number of Solution Steps

5.2.3. Training the Model by Minimizing the Cost Function

5.3. Analyzing PINN Outputs

5.3.1. Generating High-Resolution Grid Points Across the Domain

5.3.2. Extracting the PINN Solution at Grid Points

5.3.3. Plotting Contour Plots of Results at Different Time Steps

6.1.1. Overview of the Library and Its Capabilities

6.1.2. A Brief History of the DeepXDE Library

6.1.3. Installing the DeepXDE Library in a Python Environment

6.1.4. Modeling Steps in DeepXDE

6.2. Defining a Time-Dependent Differential Equation for Solution with DeepXDE

6.2.1. Introduction to the Differential Equation

6.2.2. Defining Parameters Required in DeepXDE

6.2.3. Spatiotemporal Computational Domain

6.2.4. Defining Boundary and Initial Conditions

6.3. Implementing the Differential Equation in DeepXDE

6.3.1. Defining Neural Network Derivatives Using Jacobian and Hessian Operators

6.3.2. Implementing the Differential Equation in DeepXDE

6.4. Training the Model

6.4.1. Generating a Random Grid of Points Across the Computational Domain and on Boundaries

6.4.2. Defining and Implementing the Neural Network Architecture

6.4.3. Defining the Adam Optimizer

6.4.4. Defining the L-BFGS-B Optimizer

6.4.5. Training the Neural Network by Running Optimization

6.5. Analyzing and Interpreting PINN Outputs

6.5.1. Plotting and Analyzing the Learning Curve

6.5.2. Plotting the Spatiotemporal u-Field from the PINN Output

6.6. Further Research Directions with DeepXDE

**Pre-registration**

The tuition fee for this course is **300 USD**. After you pre-register, the course administrator will email you a deposit slip for payment. You can pay the fee by bank e-transfer or PayPal.

Students are very welcome to join our courses and start on the path to becoming professional researchers while still studying. If you are unable to pay the full tuition upfront, you may request to pay in two or three installments.

All classes are recorded, and each week's class videos are shared with all participants early the following week. Even if you cannot attend the live session, you can catch up by watching the recording; we do, however, recommend attending live.
