Partial Differential Equations

Partial differential equations govern fluid flow, heat conduction, wave propagation, quantum mechanics, and virtually every continuous physical process involving multiple independent variables. This guide develops the classification theory, classical solution techniques, modern functional-analytic foundations, and computational methods that form the backbone of mathematical physics and engineering analysis.

1. Classification of Second-Order PDEs

A partial differential equation involves a function of two or more independent variables together with its partial derivatives. The most important class for applications is the second-order linear PDE in two variables, which takes the general form:

A u_xx + B u_xy + C u_yy + D u_x + E u_y + F u = G

where A, B, C, D, E, F, G may depend on x and y but not on u or its derivatives (the linear case).

The discriminant B^2 - 4AC classifies the PDE at each point of the domain:

Elliptic PDEs

B^2 - 4AC < 0

Elliptic equations describe steady-state or equilibrium phenomena. There is no preferred direction; the equation looks the same from all sides. Solutions are infinitely smooth inside the domain (real-analytic if coefficients are analytic) and are completely determined by boundary data alone. The prototypes are the Laplace equation delta u = u_xx + u_yy = 0 and the Poisson equation delta u = f. Elliptic problems require boundary conditions on the entire boundary of the domain and no initial data on a time-like slice.

Parabolic PDEs

B^2 - 4AC = 0

Parabolic equations describe diffusion and smoothing processes. One variable plays the role of time and the equation propagates data forward in that direction. Solutions are immediately smooth in the spatial variables for positive time even if initial data is rough, because diffusion instantly spreads information everywhere (infinite speed of propagation). The prototype is the heat equation u_t = k u_xx. The appropriate data is an initial condition at t = 0 plus boundary conditions on the spatial boundary for all positive time.

Hyperbolic PDEs

B^2 - 4AC > 0

Hyperbolic equations describe wave propagation with finite speed. Information travels along characteristic curves, and discontinuities in initial data persist in the solution rather than being smoothed out. The prototype is the wave equation u_tt = c^2 u_xx. The appropriate data is two initial conditions at t = 0 (displacement and velocity) plus, if the spatial domain is bounded, boundary conditions. Characteristics are the curves along which disturbances travel and along which the PDE reduces to an ODE.

Key Examples by Type

  • Elliptic: Laplace equation (electrostatics, steady heat), Helmholtz equation (acoustic resonance), minimal surface equation
  • Parabolic: Heat equation (thermal diffusion), Black-Scholes equation (option pricing), Fokker-Planck equation (probability density evolution)
  • Hyperbolic: Wave equation (acoustics, electromagnetism), Klein-Gordon equation (relativistic quantum field), equations of linear elasticity
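The classification rule is mechanical enough to code. A minimal sketch in Python (the function name and the sample coefficients are illustrative, not from any library):

```python
# Classify A u_xx + B u_xy + C u_yy + (lower-order terms) = G at a point
# by the sign of the discriminant B^2 - 4AC.

def classify(A, B, C):
    disc = B * B - 4 * A * C
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

# Laplace equation u_xx + u_yy = 0:         A = 1, B = 0, C = 1
# Heat equation  k u_xx - u_t = 0 (y = t):  A = k, B = 0, C = 0
# Wave equation  u_xx - (1/c^2) u_tt = 0:   A = 1, B = 0, C = -1/c^2
```

The heat equation is parabolic for any diffusivity k because the second derivative in t is absent, and the wave equation is hyperbolic for any wave speed c.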

2. The Heat Equation

The heat equation u_t = k u_xx (or u_t = k delta u in higher dimensions) models the diffusion of heat, concentration, or any quantity spreading by random motion. The constant k is the thermal diffusivity (or diffusion coefficient), with units of length squared per time.

Separation of Variables and Fourier Series

On a finite interval [0, L] with homogeneous Dirichlet boundary conditions u(0,t) = u(L,t) = 0 and initial data u(x,0) = f(x), assume the solution factors as u(x,t) = X(x) T(t). Substituting into the heat equation:

X(x) T'(t) = k X''(x) T(t)

T'(t) / (k T(t)) = X''(x) / X(x) = -lambda

Since the left side depends only on t and the right side only on x, both must equal the same constant, written as -lambda.

The spatial ODE X'' + lambda X = 0 with X(0) = X(L) = 0 is a Sturm-Liouville eigenvalue problem. The only non-trivial solutions occur for eigenvalues lambda_n = (n pi / L)^2 with eigenfunctions X_n(x) = sin(n pi x / L) for n = 1, 2, 3, and so on. The time equation T_n' = -k lambda_n T_n gives T_n(t) = exp(-k (n pi / L)^2 t). By superposition:

u(x,t) = sum over n = 1 to infinity: b_n sin(n pi x / L) exp(-k (n pi / L)^2 t)

b_n = (2/L) integral from 0 to L of f(x) sin(n pi x / L) dx

Each mode decays exponentially with a rate proportional to n squared, so high-frequency spatial oscillations damp out far faster than low-frequency ones. This explains why diffusion smooths sharp features: the rapid modes vanish quickly, leaving only the slowly decaying fundamental mode at large times.
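The series can be checked numerically. The sketch below (the diffusivity, truncation level, and quadrature resolution are arbitrary choices) computes the coefficients b_n by the trapezoidal rule for the initial profile f(x) = x(1 - x) and sums the truncated series:

```python
import math

L, k, N = 1.0, 0.1, 50   # interval length, diffusivity, number of modes kept

def sine_coeff(f, n, M=2000):
    # b_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx, trapezoidal rule
    h = L / M
    s = sum((0.5 if i in (0, M) else 1.0) * f(i * h)
            * math.sin(n * math.pi * i * h / L) for i in range(M + 1))
    return (2.0 / L) * s * h

def heat_solution(f):
    b = [sine_coeff(f, n) for n in range(1, N + 1)]
    def u(x, t):
        # each mode decays like exp(-k (n pi / L)^2 t)
        return sum(b[n - 1] * math.sin(n * math.pi * x / L)
                   * math.exp(-k * (n * math.pi / L) ** 2 * t)
                   for n in range(1, N + 1))
    return u

u = heat_solution(lambda x: x * (1.0 - x))
```

At t = 0 the series reproduces f; as t grows the n = 1 mode dominates, since every higher mode decays at least four times as fast.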

Maximum Principle

The maximum principle for the heat equation states that on a bounded space-time cylinder (0 less than x less than L, 0 less than t less than T), the maximum and minimum of the solution are attained on the parabolic boundary: either on the initial line t = 0 or on the spatial boundary x = 0 or x = L. Interior maxima cannot occur (unless u is constant). This gives a powerful uniqueness theorem: if two solutions agree on the parabolic boundary, they agree everywhere. It also gives comparison principles: if f is non-negative, then u stays non-negative for all time.

Fundamental Solution (Heat Kernel)

On all of R^n with initial data f, the heat equation is solved by convolution with the heat kernel:

Phi(x,t) = (4 pi k t)^(-n/2) exp(-|x|^2 / (4kt)), t greater than 0

u(x,t) = integral over R^n of Phi(x-y, t) f(y) dy

Phi(x,t) starts as a delta function at t = 0 and spreads into a Gaussian bell curve of width proportional to sqrt(kt).

The heat kernel formula shows the instant regularization: no matter how rough f is (any integrable function), u(x,t) is smooth for any t greater than 0. It also reveals the infinite speed of propagation: for any t greater than 0, the temperature at any point x is already influenced by initial temperatures arbitrarily far away, though the influence decays rapidly (Gaussian decay) with distance.
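In one dimension with step initial data f = 1 on [-1, 1], the convolution has a closed form in terms of the error function, which makes the smoothing and the Gaussian-small far-field influence easy to verify (the value of k and the sample points below are arbitrary choices):

```python
import math

k = 1.0

def u_step(x, t):
    # u(x,t) for f = 1 on [-1, 1], 0 elsewhere: the Gaussian integral
    # over the support evaluates to a difference of error functions
    s = math.sqrt(4.0 * k * t)
    return 0.5 * (math.erf((1.0 - x) / s) + math.erf((1.0 + x) / s))

def u_quad(x, t, M=4000):
    # cross-check: direct convolution with the heat kernel over [-1, 1]
    h = 2.0 / M
    total = 0.0
    for i in range(M):
        y = -1.0 + (i + 0.5) * h
        total += math.exp(-(x - y) ** 2 / (4.0 * k * t))
    return total * h / math.sqrt(4.0 * math.pi * k * t)
```

u_step(5.0, 1.0) is strictly positive even though x = 5 lies far outside the initial support, illustrating infinite propagation speed, yet its value is already below one percent.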

3. The Wave Equation

The wave equation u_tt = c^2 u_xx describes vibrating strings, acoustic pressure waves, electromagnetic waves, and seismic waves. The constant c is the wave speed. In n dimensions: u_tt = c^2 delta u where delta denotes the Laplacian in the spatial variables.

D'Alembert's Formula

On the real line with initial displacement u(x,0) = f(x) and initial velocity u_t(x,0) = g(x), the unique solution is given by d'Alembert's formula:

u(x,t) = (1/2)[f(x+ct) + f(x-ct)] + (1/(2c)) integral from x-ct to x+ct of g(s) ds

The formula decomposes u into a right-traveling wave (1/2) f(x - ct) and a left-traveling wave (1/2) f(x + ct), each a half-amplitude copy of the initial displacement carried rigidly at speed c. Unlike the heat equation, there is no smoothing: a discontinuity in f propagates unchanged along the characteristic lines x - ct = constant and x + ct = constant. The solution at (x, t) depends only on initial data in the interval [x - ct, x + ct], the domain of dependence, illustrating finite speed of propagation.
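The formula is straightforward to evaluate; the sketch below (wave speed, initial data, and quadrature resolution are arbitrary choices) compares a trapezoidal evaluation of the velocity integral against the closed form available when g = cos:

```python
import math

c = 2.0
f = lambda x: math.exp(-x * x)   # initial displacement
g = math.cos                     # initial velocity

def dalembert(x, t, M=4000):
    # u = (1/2)[f(x+ct) + f(x-ct)] + (1/(2c)) * integral_{x-ct}^{x+ct} g(s) ds
    a, b = x - c * t, x + c * t
    h = (b - a) / M
    integ = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, M))
    return 0.5 * (f(x + c * t) + f(x - c * t)) + integ * h / (2.0 * c)

def exact(x, t):
    # for g = cos the velocity integral is sin(x+ct) - sin(x-ct)
    return (0.5 * (f(x + c * t) + f(x - c * t))
            + (math.sin(x + c * t) - math.sin(x - c * t)) / (2.0 * c))
```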

Characteristics and the Factored Wave Operator

Changing variables to xi = x + ct and eta = x - ct transforms the wave equation into u_xi_eta = 0, which integrates trivially to give u = F(xi) + G(eta) = F(x+ct) + G(x-ct). The characteristic lines xi = constant and eta = constant are the curves along which disturbances travel. The characteristic parallelogram gives a simple geometric identity: if A, B, C, D are the vertices of a parallelogram whose sides lie along characteristics, with A and C opposite, then u(A) + u(C) = u(B) + u(D), so the value at any vertex is determined by the values at the other three. Characteristics also reveal why hyperbolic equations can have discontinuous solutions: a discontinuity in initial data simply propagates along the characteristic through that point.

Energy Conservation

The wave equation conserves energy. Define the total energy as the sum of kinetic and potential energy:

E(t) = (1/2) integral of [ (u_t)^2 + c^2 (u_x)^2 ] dx

Differentiating under the integral and integrating by parts (using the wave equation) gives dE/dt = 0, confirming conservation.

Energy conservation provides a uniqueness proof: if u and v are both solutions with the same initial data, then w = u - v solves the wave equation with zero initial data, so E_w(t) = E_w(0) = 0, forcing w = 0. In bounded domains, conservation holds provided the boundary conditions do no work (for example, homogeneous Dirichlet or Neumann conditions).
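Conservation can be verified numerically on the standing wave u = sin(pi x) cos(pi c t), an exact solution on [0, 1] with homogeneous Dirichlet conditions; its energy should equal (pi c)^2 / 4 for every t (the value of c and the quadrature resolution are arbitrary choices):

```python
import math

c = 3.0

def energy(t, M=2000):
    # E(t) = (1/2) * integral_0^1 [ u_t^2 + c^2 u_x^2 ] dx for
    # u(x,t) = sin(pi x) cos(pi c t); derivatives taken analytically
    h = 1.0 / M
    total = 0.0
    for i in range(M + 1):
        x = i * h
        ut = -math.pi * c * math.sin(math.pi * x) * math.sin(math.pi * c * t)
        ux = math.pi * math.cos(math.pi * x) * math.cos(math.pi * c * t)
        w = 0.5 if i in (0, M) else 1.0
        total += w * (ut * ut + c * c * ux * ux)
    return 0.5 * total * h
```

Evaluating E at several unrelated times returns the same constant, which is the numerical face of dE/dt = 0.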

Separation of Variables for the Wave Equation

On [0, L] with u(0,t) = u(L,t) = 0, separating variables gives the same spatial eigenfunctions sin(n pi x / L) but now the time equation is T_n'' = -c^2 (n pi / L)^2 T_n, yielding T_n(t) = A_n cos(omega_n t) + B_n sin(omega_n t) where omega_n = c n pi / L are the natural frequencies. The solution is:

u(x,t) = sum over n of sin(n pi x / L) [A_n cos(omega_n t) + B_n sin(omega_n t)]

The fundamental frequency omega_1 = c pi / L corresponds to the first harmonic. The higher frequencies omega_n = n omega_1 are integer multiples of the fundamental, which is why a plucked string produces a musical note with overtones.

4. Laplace and Poisson Equations

The Laplace equation delta u = 0 and the Poisson equation delta u = f are the fundamental elliptic PDEs. They arise in electrostatics (u is electric potential, f is charge density), steady-state heat conduction (u is temperature, f is heat source), gravitational potential theory, and complex analysis (real and imaginary parts of analytic functions are harmonic).

Harmonic Functions

A function satisfying delta u = 0 is called harmonic. Harmonic functions have remarkable regularity: they are real-analytic (equal to their own Taylor series) at every interior point. This follows from the integral representation via Green's functions. Equivalently, harmonic functions are precisely the functions that locally minimize the Dirichlet energy integral of |grad u| squared among all functions with the same boundary values.

Mean Value Property

The mean value property is both a consequence and a characterization of harmonicity:

u(x_0) = (1 / volume of B_r) integral over B_r(x_0) of u dV

u(x_0) = (1 / area of S_r) integral over S_r(x_0) of u dS

The value at x_0 equals the volume average over any ball centered at x_0, and also the surface average over the bounding sphere.

The mean value property immediately implies the maximum principle: if u achieves its maximum at an interior point x_0, then u is constant in a neighborhood of x_0 (the value at x_0 equals the average over a ball, and an average can equal the maximum only if u equals that maximum throughout the ball), and by connectivity u is constant throughout the domain. This rules out interior maxima for non-constant harmonic functions.
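The mean value property is easy to test numerically on the harmonic function u = x^2 - y^2 (the center, radius, and sample count below are arbitrary choices):

```python
import math

def u(x, y):
    # u = x^2 - y^2 is harmonic: u_xx + u_yy = 2 - 2 = 0
    return x * x - y * y

def circle_average(x0, y0, r, M=10000):
    # average of u over the circle of radius r centered at (x0, y0)
    total = 0.0
    for i in range(M):
        theta = 2.0 * math.pi * i / M
        total += u(x0 + r * math.cos(theta), y0 + r * math.sin(theta))
    return total / M
```

The circle average matches the center value for every radius, which is exactly the two-dimensional mean value property.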

Dirichlet and Neumann Boundary Value Problems

The Dirichlet problem asks for a harmonic function on Omega that equals prescribed data g on the boundary: delta u = 0 in Omega, u = g on the boundary of Omega. This models the temperature in a body with prescribed surface temperatures. By the maximum principle, the solution is unique. The Neumann problem instead prescribes the normal derivative: delta u = 0 in Omega, partial u / partial n = h on the boundary. The Neumann problem has a solution only if the integral of h over the boundary is zero (the net flux condition), and the solution is unique only up to an additive constant. The mixed (Robin) problem prescribes a linear combination of u and its normal derivative.

Green's Functions

Green's function G(x,y) for the Laplacian on Omega encodes the full solution operator. It satisfies:

-delta_x G(x,y) = delta(x-y) in Omega

G(x,y) = 0 for x on the boundary of Omega

The fundamental solution of the Laplacian is Phi(x,y) = -(1/(2 pi)) log|x-y| in R^2 and Phi(x,y) = 1 / (4 pi |x-y|) in R^3. The Green function for a general domain is G(x,y) = Phi(x,y) + h(x,y) where h is a corrector harmonic function chosen so G vanishes on the boundary. For a ball in R^3 of radius R, the correction is obtained by the method of images: place a negative image charge at the inversion point y* = R squared y / |y| squared, with magnitude scaled to cancel the boundary values of Phi.

Once G is known, the solution to the Poisson equation -delta u = f with u = g on the boundary is:

u(x) = integral over Omega of G(x,y) f(y) dy minus integral over boundary of G_n(x,y) g(y) dS(y)

The term -partial G / partial n_y is the Poisson kernel, which gives the harmonic extension of boundary data. For the upper half-space in R^n, the Poisson kernel is an explicit formula resembling a multidimensional Cauchy integral, and convolution with it solves the Dirichlet problem with L^p boundary data.
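As a concrete instance of the half-space case, the upper half-plane in R^2 has Poisson kernel P_y(x) = (1/pi) y / (x^2 + y^2). The sketch below (step boundary data on [-1, 1] and the evaluation points are arbitrary choices) checks direct convolution against the closed arctangent form:

```python
import math

def u_closed(x, y):
    # harmonic extension of g = 1 on [-1, 1], 0 elsewhere, into y > 0:
    # integrating the kernel over [-1, 1] gives a sum of arctangents
    return (math.atan((1.0 - x) / y) + math.atan((1.0 + x) / y)) / math.pi

def u_quad(x, y, M=4000):
    # direct convolution with P_y over the support [-1, 1] of g (midpoint rule)
    h = 2.0 / M
    total = 0.0
    for i in range(M):
        s = -1.0 + (i + 0.5) * h
        total += y / ((x - s) ** 2 + y * y)
    return total * h / math.pi
```

As y shrinks toward the boundary, u approaches the boundary data g pointwise: near 1 inside the support and near 0 outside.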

5. Method of Characteristics for First-Order PDEs

First-order PDEs of the form a(x,y,u) u_x + b(x,y,u) u_y = c(x,y,u) are solved by the method of characteristics, which converts the PDE into a system of ODEs along special curves in the (x,y) plane.

The Characteristic ODEs

Introduce a parameter s along each characteristic curve. The characteristic equations are:

dx/ds = a(x, y, u)

dy/ds = b(x, y, u)

du/ds = c(x, y, u)

Along each characteristic, u evolves according to the third ODE. Given initial data u(x_0(r), y_0(r)) = u_0(r) on a parametrized initial curve, one solves the characteristic system starting from each point of the initial curve, tracing out the solution surface in (x, y, u) space. The solution exists and is unique as long as the characteristics do not cross and the initial curve is not itself a characteristic (the non-characteristic condition).

Transport Equation Example

The simplest example is the constant-coefficient transport equation u_t + c u_x = 0 with initial data u(x,0) = f(x). The characteristic equations are dt/ds = 1, dx/ds = c, du/ds = 0, giving characteristics x - ct = constant along which u is constant. Therefore u(x,t) = f(x - ct): the initial profile is simply advected to the right at speed c without any change in shape.
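The characteristic system can be integrated numerically and compared with the closed-form answer u(x,t) = f(x - ct); a minimal sketch (the profile, speed, and step count are arbitrary choices):

```python
import math

c = 1.5
f = lambda x: math.exp(-x * x)   # initial profile u(x, 0)

def trace_characteristic(x0, t, steps=1000):
    # integrate dx/ds = c, du/ds = 0 forward from (x0, 0) with Euler steps;
    # u stays frozen at its initial value along the curve
    x, u = x0, f(x0)
    ds = t / steps
    for _ in range(steps):
        x += c * ds   # du/ds = 0, so u is untouched
    return x, u
```

The characteristic lands at x0 + ct carrying the value f(x0), which is precisely the statement u(x, t) = f(x - ct).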

Nonlinear First-Order PDEs and Shocks

For nonlinear equations like the inviscid Burgers equation u_t + u u_x = 0, the characteristic speed depends on u itself: dx/dt = u, du/dt = 0. Characteristics are straight lines with slope depending on the initial data. When characteristics carry different values toward a common point, they eventually cross, and the solution becomes multi-valued at finite time. This is the formation of a shock: a moving discontinuity in the solution. The Rankine-Hugoniot condition determines the shock speed, and an entropy condition selects the physically admissible shock from the various weak solutions satisfying the Rankine-Hugoniot condition.
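For Burgers with initial data u(x,0) = f(x), each characteristic is the straight line x = x0 + f(x0) t, and the first crossing (breaking) time is t* = -1 / min f'(x0) whenever f has negative slope somewhere. A sketch for the Gaussian profile f(x) = exp(-x^2), whose exact breaking time works out to sqrt(e/2) (the scan interval and grid are arbitrary choices):

```python
import math

def fp(x):
    # derivative of the initial profile f(x) = exp(-x^2)
    return -2.0 * x * math.exp(-x * x)

def breaking_time(lo=-3.0, hi=3.0, M=20000):
    # t* = -1 / min f'(x): scan a fine grid for the steepest falling slope
    h = (hi - lo) / M
    slope = min(fp(lo + i * h) for i in range(M + 1))
    return -1.0 / slope
```

The steepest slope sits at x = 1/sqrt(2), where f' = -sqrt(2) e^(-1/2), so the classical solution survives exactly until t* = sqrt(e/2), about 1.166.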

6. Sobolev Spaces and Weak Formulations

Classical PDE theory requires solutions that are sufficiently differentiable. For irregular data or domains, classical solutions may not exist. Modern PDE theory handles this through weak (distributional) formulations in Sobolev spaces, which are function spaces built for PDE analysis.

Sobolev Spaces W^k,p

The Sobolev space W^k,p(Omega) consists of all functions in L^p(Omega) whose distributional derivatives of order up to k also lie in L^p. The norm is the sum of L^p norms of the function and all its derivatives up to order k. The Hilbert space case p = 2 is written H^k(Omega). Key spaces include:

  • H^0 = L^2: square-integrable functions
  • H^1: functions with square-integrable gradient, the natural space for elliptic problems
  • H^1_0: H^1 functions with zero boundary trace, used for Dirichlet problems
  • H^-1: the dual space of H^1_0, containing the right-hand sides f allowed in weak formulations

Sobolev Embedding Theorems

Sobolev embedding theorems relate membership in W^k,p to pointwise regularity. The key result: if k minus n/p is greater than m, then W^k,p embeds into C^m (m-times continuously differentiable functions). In particular, in dimension n = 3, functions in H^2 are continuous. Trace theorems define the boundary values (trace) of Sobolev functions, giving a rigorous meaning to the restriction of an H^1 function to the boundary as an element of H^(1/2) of the boundary.

Weak Formulation and the Lax-Milgram Theorem

For the Dirichlet problem -delta u = f on Omega with u = 0 on the boundary, multiply by a test function v in H^1_0 and integrate by parts to obtain the weak formulation:

integral of grad u dot grad v dx = integral of f v dx, for all v in H^1_0

A weak solution is a function u in H^1_0 satisfying this identity. The Lax-Milgram theorem applies: the bilinear form a(u,v) = integral of grad u dot grad v dx is continuous and coercive on H^1_0 (by the Poincare inequality), so for any bounded linear functional F(v) = integral of f v dx with f in H^-1, there exists a unique weak solution. This approach extends seamlessly to more general elliptic operators, variable-coefficient problems, and mixed boundary conditions.

Elliptic Regularity

Elliptic regularity theorems show that weak solutions are as smooth as the data allows. If f is in H^m(Omega) and the boundary is smooth, then the weak solution u is in H^(m+2)(Omega). In particular, if f is in C^infinity then u is in C^infinity. This bootstrapping argument shows that weak solutions of elliptic equations are in fact classical solutions when data is smooth: the weak and classical theories agree.

7. Finite Element Method

The finite element method (FEM) is the dominant numerical method for elliptic and parabolic PDEs on complex geometries. It discretizes the weak formulation by restricting to a finite-dimensional subspace V_h of the Sobolev space, leading to a linear algebra problem that is well-suited to computation.

Galerkin Discretization

Divide the domain Omega into a mesh of triangles (in 2D) or tetrahedra (in 3D). On each element, approximate u by a polynomial (typically piecewise linear or quadratic). The basis functions phi_1 through phi_N are chosen to be hat functions: each phi_i equals 1 at node i and 0 at all other nodes. The approximate solution is u_h = sum of u_j phi_j. The Galerkin condition requires the weak formulation to hold for all basis functions:

sum over j: u_j [integral of grad phi_j dot grad phi_i dx] = integral of f phi_i dx

K u = F

K is the stiffness matrix with K_ij = integral of grad phi_j dot grad phi_i dx. F is the load vector with F_i = integral of f phi_i dx.

The stiffness matrix K is symmetric positive definite and sparse (because basis functions overlap only with neighboring nodes on the mesh), allowing efficient solution by direct sparse factorization or iterative methods like conjugate gradients. Cea's lemma guarantees that the FEM solution u_h is the best approximation to u in the energy norm within the finite element space: the error is controlled by how well the exact solution can be approximated by piecewise polynomials on the mesh.

Error Estimates and Convergence

For piecewise linear elements on a quasi-uniform mesh with element diameter h, the H^1 error satisfies the bound ||u - u_h||_H^1 is at most C h ||u||_H^2, and the L^2 error satisfies ||u - u_h||_L^2 is at most C h^2 ||u||_H^2. Higher-order polynomial elements give higher convergence rates. Adaptive FEM refines the mesh where the error is large (guided by a posteriori error estimators) to achieve a desired accuracy with minimal degrees of freedom.
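A complete one-dimensional example ties the pieces together: piecewise linear elements for -u'' = f on [0, 1] with u(0) = u(1) = 0 and f = pi^2 sin(pi x), so the exact solution is sin(pi x). The lumped load quadrature and the mesh sizes below are simple illustrative choices, not the only options:

```python
import math

def fem_1d(n, f):
    # piecewise linear FEM on a uniform mesh with n interior nodes;
    # stiffness entries: int phi_i' phi_j' dx = 2/h (diag), -1/h (off-diag);
    # load: int f phi_i dx approximated by h * f(x_i) (lumped quadrature)
    h = 1.0 / (n + 1)
    diag = [2.0 / h] * n
    off = -1.0 / h
    rhs = [h * f((i + 1) * h) for i in range(n)]
    # Thomas algorithm for the symmetric tridiagonal system K u = F
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = off / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - off * cp[i - 1]
        cp[i] = off / m
        dp[i] = (rhs[i] - off * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

f = lambda x: math.pi ** 2 * math.sin(math.pi * x)
uh = fem_1d(99, f)
err = max(abs(uh[i] - math.sin(math.pi * (i + 1) * 0.01)) for i in range(99))
```

Refining from 9 to 99 interior nodes shrinks the maximum nodal error by roughly the factor of 100 expected from the h^2 estimate.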

8. Navier-Stokes Equations Overview

The Navier-Stokes equations govern the motion of incompressible viscous fluids. They combine Newton's second law with the incompressibility constraint:

rho (u_t + (u dot grad) u) = -grad p + mu delta u + f

div u = 0

u is the velocity vector field, p is pressure, rho is density, mu is dynamic viscosity, and f is external body force.

The left side of the momentum equation is the material derivative (acceleration following a fluid particle), which includes the nonlinear convective term (u dot grad) u. The right side contains the pressure gradient (which drives flow), the viscous dissipation term mu delta u, and body forces. The incompressibility condition div u = 0 enforces conservation of mass for constant-density fluids and couples the velocity and pressure.

The mathematical theory of Navier-Stokes is among the deepest open problems in mathematics. In two dimensions, global smooth solutions are known to exist for all time with smooth initial data. In three dimensions, the global regularity problem (a Clay Millennium Problem) remains unsolved: it is unknown whether solutions can develop singularities (blow-up) in finite time. Weak solutions (Leray-Hopf solutions in L^2) exist globally in 3D but their uniqueness is unknown. The Reynolds number Re = rho U L / mu characterizes the flow regime: low Re gives laminar flow well-described by linearized Stokes equations, while large Re gives turbulence with a cascade of energy across spatial scales described by Kolmogorov's theory.

9. The Schrodinger Equation

The time-dependent Schrodinger equation governs the evolution of the quantum mechanical wave function psi(x,t):

i hbar psi_t = -(hbar^2 / 2m) delta psi + V(x) psi

where hbar is the reduced Planck constant, m is the particle mass, and V(x) is the potential energy. This is a complex-valued linear PDE that is parabolic in structure (first order in time, second order in space) but with an imaginary coefficient, giving it dispersive (wave-like) rather than dissipative (diffusive) character.

The stationary states are solutions of the form psi(x,t) = phi(x) exp(-i E t / hbar), where phi satisfies the time-independent Schrodinger equation (an eigenvalue problem):

-(hbar^2 / 2m) delta phi + V(x) phi = E phi

The eigenvalues E_n are the allowed energy levels and the eigenfunctions phi_n are the stationary state wave functions. The general solution is a superposition of stationary states. The mathematical analysis involves the spectral theory of the Schrodinger operator H = -(hbar^2 / 2m) delta + V on L^2(R^n). For confining potentials (V goes to infinity at infinity), the spectrum is discrete; for scattering problems, the spectrum is continuous. Scattering theory connects the large-time behavior of solutions to asymptotic wave behavior described by the S-matrix.
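The eigenvalue problem can be attacked numerically by shooting: integrate the ODE for a trial energy E and adjust E until the far boundary condition is met. The sketch below treats the infinite square well on [0, 1] in units hbar = m = 1 (V = 0 inside, so phi'' = -2 E phi with phi(0) = phi(1) = 0 and exact ground state E_1 = pi^2 / 2); the step count and bracketing interval are arbitrary choices:

```python
import math

def endpoint(E, steps=2000):
    # integrate phi'' = -2 E phi from phi(0) = 0, phi'(0) = 1 with RK4;
    # an eigenvalue E makes the returned value phi(1) vanish
    def rhs(p, q):
        return q, -2.0 * E * p
    h = 1.0 / steps
    p, q = 0.0, 1.0
    for _ in range(steps):
        k1 = rhs(p, q)
        k2 = rhs(p + 0.5 * h * k1[0], q + 0.5 * h * k1[1])
        k3 = rhs(p + 0.5 * h * k2[0], q + 0.5 * h * k2[1])
        k4 = rhs(p + h * k3[0], q + h * k3[1])
        p += h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        q += h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return p

def ground_state(lo=1.0, hi=10.0, iters=60):
    # bisection on E: phi(1; E) changes sign across the lowest eigenvalue
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if endpoint(lo) * endpoint(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```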

10. Conservation Laws and Shock Waves

A scalar conservation law in one space dimension has the form u_t + f(u)_x = 0, expressing that the quantity u is conserved: the rate of change of u in any interval equals the net flux through the endpoints. The flux function f(u) determines the wave speed f'(u) at which disturbances travel. When f is nonlinear, different parts of the solution travel at different speeds, leading to wave steepening and eventual shock formation.

Weak Solutions and the Rankine-Hugoniot Condition

Classical solutions of conservation laws break down at shock formation. Weak solutions are defined by the integral form of the conservation law: for any smooth test function phi with compact support,

double integral [u phi_t + f(u) phi_x] dx dt + integral u_0(x) phi(x,0) dx = 0

Across a shock curve x = s(t), the Rankine-Hugoniot condition must hold: the shock speed ds/dt equals [f(u_R) - f(u_L)] / [u_R - u_L], the ratio of the jump in flux to the jump in the conserved quantity.
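For the inviscid Burgers equation the flux is f(u) = u^2 / 2, so the Rankine-Hugoniot speed reduces to the average of the two states; a minimal sketch that also tests the Lax admissibility criterion discussed below:

```python
def flux(u):
    # Burgers flux f(u) = u^2 / 2
    return 0.5 * u * u

def shock_speed(uL, uR):
    # Rankine-Hugoniot: s = [f(uR) - f(uL)] / (uR - uL)
    return (flux(uR) - flux(uL)) / (uR - uL)

def lax_admissible(uL, uR):
    # for this convex flux f'(u) = u, so the condition f'(uL) > s > f'(uR)
    # becomes uL > s > uR: characteristics impinge on the shock
    s = shock_speed(uL, uR)
    return uL > s > uR
```

With uL = 2 and uR = 0 the shock moves at speed 1 and is admissible; reversing the states satisfies Rankine-Hugoniot with the same speed but fails the Lax condition, and the physical solution is a rarefaction instead.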

Entropy Conditions

The conservation law may have multiple weak solutions satisfying the Rankine-Hugoniot condition; the physical one is selected by an entropy condition. The Lax entropy condition for a convex flux f requires f'(u_L) greater than s greater than f'(u_R): characteristics from both sides impinge on the shock (the shock is compressive). Expansion shocks (where characteristics spread away from the discontinuity) are not physical and are excluded. The Oleinik entropy condition provides a more general criterion. Kruzhkov's theorem establishes global existence and uniqueness of entropy solutions for L^infinity initial data, where the entropy solution is the unique limit of vanishing viscosity approximations u_epsilon solving u_t + f(u)_x = epsilon u_xx as epsilon approaches zero.

Systems of Conservation Laws

The Euler equations of gas dynamics and the equations of nonlinear elasticity are systems of conservation laws: U_t + F(U)_x = 0 where U is a vector of conserved quantities and F is the flux vector. The Jacobian DF(U) has eigenvalues (wave speeds) and right eigenvectors (wave families). A system is hyperbolic if DF has n real eigenvalues and n linearly independent right eigenvectors. Riemann problems (initial data consisting of a single jump from U_L to U_R) are solved by a combination of shocks, rarefaction waves, and contact discontinuities from the different wave families. The Godunov method for numerical computation uses exact or approximate Riemann solvers at each cell interface.
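A minimal Godunov scheme for the scalar Burgers equation u_t + (u^2/2)_x = 0 shows the structure: one exact Riemann solve per cell interface feeding a conservative update (the grid size, CFL number, step initial data, and fixed-state boundary treatment below are arbitrary choices):

```python
def godunov_flux(uL, uR):
    # exact Riemann flux for the convex flux f(u) = u^2 / 2
    if uL <= uR:                        # rarefaction
        if uL >= 0.0:
            return 0.5 * uL * uL
        if uR <= 0.0:
            return 0.5 * uR * uR
        return 0.0                      # transonic fan: wave speed 0 inside
    s = 0.5 * (uL + uR)                 # shock speed (Rankine-Hugoniot)
    return 0.5 * uL * uL if s > 0.0 else 0.5 * uR * uR

def burgers_godunov(u, dx, T, cfl=0.5):
    # fixed time step; assumes |u| <= 1, which the monotone scheme preserves
    steps = int(round(T / (cfl * dx)))
    dt = T / steps
    for _ in range(steps):
        F = [godunov_flux(u[i], u[i + 1]) for i in range(len(u) - 1)]
        u = [u[0]] + [u[i] - dt / dx * (F[i] - F[i - 1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

n, dx = 200, 1.0 / 200
u0 = [1.0 if (i + 0.5) * dx < 0.25 else 0.0 for i in range(n)]
uT = burgers_godunov(u0, dx, 0.5)   # shock travels at speed 1/2: 0.25 -> 0.5
```

Starting from a step at x = 0.25 with left state 1 and right state 0, the computed shock sits near x = 0.5 at time 0.5, smeared over only a few cells, and the conservative update keeps the total mass exact up to boundary fluxes.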

Frequently Asked Questions

How are second-order PDEs classified, and why does it matter?

A second-order linear PDE in two variables A u_xx + B u_xy + C u_yy plus lower-order terms equals zero is classified by the discriminant D = B^2 - 4AC. When D is negative the equation is elliptic, when D equals zero it is parabolic, and when D is positive it is hyperbolic. The classification is essential because it determines the appropriate data (what initial and boundary conditions make the problem well-posed), the qualitative behavior of solutions (smoothing vs. wave propagation vs. steady equilibria), and which numerical methods converge. An elliptic solver applied to a hyperbolic problem will produce nonsense; the correct method must match the equation type.

Walk through solving the heat equation by separation of variables.

Consider u_t = k u_xx on [0,L] with u(0,t) = u(L,t) = 0 and u(x,0) = f(x). Assume u(x,t) = X(x) T(t) and substitute: the PDE becomes X T' = k X'' T, which rearranges to T'/(kT) = X''/X = -lambda. The left side is a function of t alone and the right of x alone, so both equal the separation constant -lambda. The eigenvalue problem X'' + lambda X = 0, X(0) = X(L) = 0 has solutions X_n = sin(n pi x / L) for lambda_n = (n pi / L) squared. Each T_n decays as exp(-k lambda_n t). The full solution is a Fourier sine series with coefficients b_n = (2/L) times the integral of f(x) sin(n pi x / L) from 0 to L. The solution is the sum of these modes, each decaying at its own exponential rate, with high-frequency modes dying out first.

What does d'Alembert's formula tell us about the wave equation?

D'Alembert's formula u(x,t) = (1/2)[f(x+ct) + f(x-ct)] plus (1/(2c)) times the integral of g from x-ct to x+ct shows that the solution splits into a right-traveling half-amplitude copy of the initial displacement and a left-traveling half-amplitude copy, each moving at speed c with its shape preserved. There is no smoothing: a sharp edge in f stays a sharp edge forever. The integral term in g represents the velocity contribution: the initial velocity over the interval [x-ct, x+ct] is accumulated into displacement as time advances. The domain of dependence of any point (x,t) is the interval [x-ct, x+ct] on the initial line, proving that waves travel at finite speed c and that no information from outside this interval has reached (x,t) by time t.

What is the maximum principle and what are its main consequences?

The strong maximum principle for harmonic functions states that a non-constant harmonic function on a connected domain cannot attain its maximum or minimum at an interior point: these extrema occur only on the boundary. This follows from the mean value property: if u(x_0) were a maximum and u(x_0) equaled the average of u over a ball, then u would equal u(x_0) throughout the ball, and by connectivity throughout the domain. Consequences include: (1) uniqueness of the Dirichlet problem, since the difference of two solutions with the same boundary data is harmonic with zero boundary values, so its maximum and minimum are both zero; (2) the comparison principle: if u is at most v on the boundary then u is at most v everywhere; and (3) a harmonic function is bounded by the maximum of its boundary data. The maximum principle also holds for the heat equation (on the parabolic boundary) and for more general elliptic operators.

How does the method of characteristics work for a first-order PDE?

For a(x,y,u) u_x + b(x,y,u) u_y = c(x,y,u), a characteristic is a curve in the (x,y) plane along which the PDE reduces to an ODE. Parameterize the curve by s: then dx/ds = a, dy/ds = b, du/ds = c. Given initial data on a curve gamma in the (x,y) plane, solve this ODE system starting from each point of gamma, tracing out a characteristic. The values of u along each characteristic are determined by the third ODE. The solution surface in (x, y, u) space is the union of all these characteristic curves. The method succeeds as long as characteristics from different points of gamma do not cross. When they do, a multi-valued surface forms: this is the beginning of shock formation, and the classical solution breaks down at the first crossing time.

What is Green's function and how is it used to solve the Poisson equation?

Green's function G(x,y) for the Laplacian on a domain Omega with Dirichlet boundary conditions is the solution to -delta_x G(x,y) = delta(x-y), G = 0 on the boundary. It represents the potential at x from a unit point source at y. Once G is known, the Poisson equation -delta u = f with u = 0 on the boundary has solution u(x) = the integral of G(x,y) f(y) over Omega. For a domain with boundary data g and f = 0, the solution involves the Poisson kernel: u(x) = the negative integral of the normal derivative of G times g over the boundary. For a ball in R^3, the explicit Poisson formula is obtained by the method of images. Green's functions encapsulate all the information about the operator and domain.

What are Sobolev spaces and why do PDEs need them?

Sobolev spaces H^k(Omega) consist of functions whose distributional derivatives up to order k are square-integrable. They are the right setting for PDEs because: (1) classical solutions require pointwise differentiability that may not exist for rough data or non-smooth domains, (2) the weak (variational) formulation of an elliptic PDE naturally lives in H^1, (3) the Lax-Milgram theorem in Hilbert spaces guarantees existence and uniqueness of weak solutions, and (4) elliptic regularity then shows that these weak solutions are smooth when data is smooth, recovering classical solutions as a special case. Without Sobolev spaces, we would have no general existence and uniqueness theory, only methods that work case by case.

How does the finite element method discretize a PDE?

FEM starts from the weak formulation: find u in H^1_0 such that the integral of grad u dot grad v equals the integral of f v for all v in H^1_0. It replaces H^1_0 with a finite-dimensional subspace V_h of piecewise polynomial functions on a mesh. Choosing a basis of hat functions phi_1 through phi_N (each equal to 1 at one node and 0 elsewhere), the Galerkin condition gives a linear system K u = F where K_ij equals the integral of grad phi_j dot grad phi_i. This stiffness matrix is sparse (nonzero only for neighboring nodes), symmetric positive definite, and well-suited to conjugate gradient iteration. The FEM solution u_h is the best approximation to u in the energy norm within V_h by Cea's lemma, and the error decreases as h to the power k for degree-k elements as the mesh is refined.

What is the Rankine-Hugoniot condition and why is it not enough to determine shock solutions?

For a conservation law u_t + f(u)_x = 0, the Rankine-Hugoniot condition states that across a shock moving at speed s, the jump in flux equals s times the jump in u: f(u_R) minus f(u_L) equals s times (u_R minus u_L). This is derived by integrating the conservation law in weak form across the shock curve. However, a given pair of states (u_L, u_R) can be connected by either a shock or a rarefaction wave, and the Rankine-Hugoniot condition is satisfied by both. The entropy condition selects the physical solution: for a convex flux, the Lax entropy condition requires that characteristics from both sides impinge on the shock, meaning f'(u_L) is greater than s and s is greater than f'(u_R). Equivalently, the viscosity criterion says the physical weak solution is the limit of solutions to the viscous equation u_t + f(u)_x = epsilon u_xx as viscosity epsilon approaches zero.

What is the relationship between the Schrodinger equation and the heat equation?

The Schrodinger equation i hbar psi_t = H psi and the heat equation u_t = k delta u are related by the substitution t goes to it (imaginary time), which transforms one into the other. Both are first-order in time and second-order in space, but the factor of i makes the Schrodinger equation dispersive rather than dissipative. Heat equation solutions decay in amplitude and smooth out (dissipation), while Schrodinger solutions preserve the L^2 norm (unitarity of the time evolution) and exhibit wave-packet spreading (dispersion). The heat kernel analytically continues to the free-particle Schrodinger propagator. Both equations are amenable to Fourier transform methods: Fourier modes are eigenfunctions of the spatial operator in both cases, but in heat flow the amplitude decays while in quantum mechanics it oscillates.


Ready to master partial differential equations?

NailTheTest provides structured study guides, practice problems, and exam preparation for advanced mathematics courses at the undergraduate and graduate level. Start building the problem-solving instincts that turn PDE theory into usable skill.