Control Theory
A comprehensive mathematical reference covering open-loop and closed-loop systems, transfer functions, stability analysis, PID control, state-space methods, optimal control, and discrete-time design — from classical frequency-domain tools to modern state-space techniques.
1. Fundamentals of Control Systems
Open-Loop Control
An open-loop control system applies a control action computed purely from the reference input without measuring the actual output. The controller C acts on the reference r to produce a command u, which drives the plant P to produce output y. Because there is no feedback path, any disturbance d or plant modeling error propagates directly to the output uncorrected.
r(t) ──► [ Controller C ] ──► u(t) ──► [ Plant P ] ──► y(t)
                                            ↑
                                   Disturbance d(t)

Open-loop control is adequate when the plant model is accurate, disturbances are negligible, and precise output tracking is not required. Examples include simple timer-driven appliances and some stepper motor drives where position is inferred from step count rather than measured.
Closed-Loop (Feedback) Control
Closed-loop control measures the actual output y(t) and computes the error signal e(t) = r(t) - y(t). The controller processes this error to produce a corrective control action u(t). The feedback path makes the system self-correcting: disturbances and plant variations are automatically attenuated.
r(t) ──►(+)──► [ Controller C(s) ] ──► u(t) ──► [ Plant G(s) ] ──► y(t)
↑(-) ◄──────────────────────────────────────────────────┘
        └─── feedback path H(s) (unity: H = 1)

Advantages of closed-loop control include rejection of disturbances, reduced sensitivity to plant parameter variations, the ability to track time-varying references, and the capacity to stabilize open-loop unstable plants. The trade-off is the possibility of instability if the feedback loop introduces excessive gain or phase shift.
Performance Specifications
Control system performance is characterized by both transient and steady-state specifications:
- Rise time t_r — time to go from 10% to 90% of the final value
- Settling time t_s — time for the response to remain within 2% (or 5%) of the final value
- Percent overshoot (PO) — (peak - final) / final * 100%; related to damping ratio by PO = exp(-pi*zeta / sqrt(1 - zeta^2)) * 100%
- Steady-state error e_ss — the residual error after transients die out; determined by system type and the final value theorem
- Bandwidth omega_BW — frequency at which the closed-loop magnitude drops to 1/sqrt(2) of its DC value (-3 dB)
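For the underdamped case these specifications follow directly from zeta and omega_n; a small sketch of the standard formulas (the numeric values below are chosen purely for illustration):

```python
import math

def second_order_metrics(zeta, omega_n):
    """Classical step-response formulas for an underdamped (0 < zeta < 1)
    canonical second-order system."""
    omega_d = omega_n * math.sqrt(1 - zeta**2)            # damped natural frequency
    po = math.exp(-math.pi * zeta / math.sqrt(1 - zeta**2)) * 100  # percent overshoot
    t_peak = math.pi / omega_d                            # time of first peak
    t_settle = 4.0 / (zeta * omega_n)                     # 2% settling time (approx.)
    return po, t_peak, t_settle

po, tp, ts = second_order_metrics(zeta=0.5, omega_n=2.0)
print(f"PO = {po:.1f}%, t_peak = {tp:.2f} s, t_settle = {ts:.2f} s")
```

Note that the 2% settling time formula 4/(zeta*omega_n) is itself an approximation based on the decay envelope.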
2. Transfer Functions and Laplace Analysis
The Laplace transform converts a time-domain ODE into an algebraic equation in the complex frequency variable s = sigma + j*omega. For a causal LTI system with zero initial conditions, the transfer function is defined as G(s) = Y(s) / U(s), the ratio of the Laplace transform of the output to the Laplace transform of the input.
Key Laplace Transform Pairs
| Time Domain f(t) | Laplace Domain F(s) |
|---|---|
| delta(t) — impulse | 1 |
| u(t) — unit step | 1/s |
| t * u(t) — ramp | 1/s^2 |
| e^(at) * u(t) | 1/(s - a) |
| sin(omega*t) * u(t) | omega / (s^2 + omega^2) |
| cos(omega*t) * u(t) | s / (s^2 + omega^2) |
| df/dt | s*F(s) - f(0^-) |
| integral of f(tau) dtau | F(s)/s |
Standard Second-Order System
The canonical second-order transfer function is the cornerstone of classical control analysis because most physical systems can be approximated as second-order near their dominant poles: G(s) = omega_n^2 / (s^2 + 2*zeta*omega_n*s + omega_n^2), where omega_n is the undamped natural frequency and zeta is the damping ratio.
Response character depends on damping ratio:
- zeta > 1 — overdamped: two distinct real negative poles, no oscillation
- zeta = 1 — critically damped: repeated real pole, fastest non-oscillatory response
- 0 < zeta < 1 — underdamped: complex conjugate poles, oscillatory decay; peak overshoot occurs
- zeta = 0 — undamped: purely imaginary poles, sustained oscillation
- zeta < 0 — unstable: poles in right half-plane, growing oscillation
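A quick way to see these regimes is to simulate the canonical second-order step response; a sketch using SciPy (omega_n = 1 and the listed zeta values are arbitrary choices for illustration):

```python
import numpy as np
from scipy import signal

wn = 1.0
for zeta in (2.0, 1.0, 0.5, 0.0):
    # G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
    sys = signal.TransferFunction([wn**2], [1.0, 2.0 * zeta * wn, wn**2])
    t = np.linspace(0.0, 20.0, 2001)
    _, y = signal.step(sys, T=t)
    print(f"zeta = {zeta}: peak = {y.max():.3f}, y(20) = {y[-1]:.3f}")
```

The overdamped and critically damped cases never exceed the final value; zeta = 0.5 peaks about 16% above it, consistent with the overshoot formula; zeta = 0 oscillates without decay.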
Final Value Theorem and Steady-State Error
The steady-state error for unity feedback is e_ss = lim(s→0) s * E(s) = lim(s→0) s * R(s) / (1 + G(s)). System type (number of pure integrators in the open-loop) determines which reference signals can be tracked with zero steady-state error: a Type 0 system tracks step inputs with finite error, a Type 1 tracks steps perfectly but has finite ramp error, and a Type 2 tracks ramps perfectly.
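The final-value computation can be checked symbolically; a sketch with SymPy for an assumed Type 1 open loop G(s) = K/(s(s+1)) (not an example from the text):

```python
import sympy as sp

s, K = sp.symbols('s K', positive=True)
G = K / (s * (s + 1))                  # assumed Type 1 open loop (one integrator)

def ss_error(R):
    """e_ss = lim_{s->0} s*R(s) / (1 + G(s)) for unity feedback."""
    return sp.limit(s * R / (1 + G), s, 0)

print(ss_error(1 / s))                 # step reference: zero error (Type 1)
print(ss_error(1 / s**2))              # ramp reference: finite error 1/K
```

The ramp result 1/K matches the velocity error constant interpretation: e_ss = 1/Kv with Kv = lim s*G(s) = K.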
3. Poles, Zeros, and System Response
For G(s) = K * product(s - z_i) / product(s - p_j), the zeros z_i are the values of s where the output is forced to zero regardless of the input; the poles p_j are the values of s where the transfer function becomes unbounded. Poles and zeros are plotted in the s-plane (complex plane) using the convention of crosses (x) for poles and circles (o) for zeros.
                Im(s)
                  ↑
                  |        × (unstable pole, RHP)
 ─────────────────┼──────────────────── Re(s)
   ○ (zero)       |   (stable region is LHP)
        × (stable pole, LHP)

- Pole in LHP (Re(s) < 0): decaying exponential mode — stable
- Pole on imaginary axis (Re(s) = 0): oscillatory, marginally stable
- Pole in RHP (Re(s) > 0): growing mode — unstable
- Real part of pole = -sigma: time constant tau = 1/sigma
- Imaginary part of pole = omega_d: damped natural frequency
Partial Fraction Decomposition
The inverse Laplace transform is computed by decomposing G(s) into partial fractions, each corresponding to a simple pole mode. For a system with distinct poles p_1, ..., p_n: G(s) = r_1/(s - p_1) + ... + r_n/(s - p_n), with residues r_i = lim(s→p_i) (s - p_i)*G(s), so the impulse response is g(t) = r_1*e^(p_1*t) + ... + r_n*e^(p_n*t).
Dominant poles are those closest to the imaginary axis; they decay slowest and govern the primary transient behavior. Well-separated poles far to the left decay quickly and can often be neglected in approximate analysis, justifying second-order approximations for higher-order systems.
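SciPy can perform the decomposition numerically; a sketch for the assumed example G(s) = 1/((s+1)(s+2)):

```python
import numpy as np
from scipy.signal import residue

# G(s) = 1 / ((s + 1)(s + 2)) = 1/(s + 1) - 1/(s + 2)
r, p, k = residue([1.0], [1.0, 3.0, 2.0])   # num, den in descending powers of s
for ri, pi in zip(r, p):
    # each term r_i/(s - p_i) inverse-transforms to r_i * exp(p_i * t)
    print(f"residue {ri.real:+.3f} at pole {pi.real:+.3f}")
```

The pole at -1 is the dominant (slowest) mode here; the mode at -2 decays twice as fast.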
Effect of Additional Zeros
Adding a zero near the imaginary axis generally increases overshoot in the step response because the zero differentiates the input, producing an additional leading term. A right half-plane zero (non-minimum phase zero) causes the system output to initially move in the wrong direction before correcting — this is the characteristic of non-minimum phase systems such as some aircraft and ship control problems and makes control more challenging.
4. Stability — BIBO and Lyapunov
BIBO Stability
A system is Bounded-Input Bounded-Output (BIBO) stable if every bounded input produces a bounded output. For an LTI system with impulse response h(t), BIBO stability requires that h(t) be absolutely integrable: integral from 0 to ∞ of |h(t)| dt < ∞. Equivalently, every pole of the transfer function must lie strictly in the left half-plane.
A pole on the imaginary axis yields a marginally stable system that is not BIBO stable: a bounded input at the pole frequency excites an unbounded resonant response. For example, a pure integrator at s = 0 turns a bounded step input into an unbounded ramp, and a pole pair at ±j*omega_0 driven by sin(omega_0*t) produces an output that grows linearly in time.
Lyapunov Stability
Lyapunov's direct method analyzes stability without solving differential equations. For the autonomous system x-dot = f(x), find a Lyapunov function V(x) — a scalar function analogous to energy — satisfying:
- 1. V(0) = 0 and V(x) > 0 for all x ≠ 0 (positive definite)
- 2. V(x) → ∞ as ||x|| → ∞ (radially unbounded)
- 3. dV/dt = (∇V) · f(x) ≤ 0 along trajectories (negative semi-definite)
- If dV/dt < 0 strictly, the equilibrium is globally asymptotically stable
For linear systems x-dot = A*x, the equilibrium at the origin is asymptotically stable if and only if for every positive definite matrix Q, the Lyapunov equation A^T * P + P * A = -Q has a unique positive definite solution P. The quadratic form V(x) = x^T * P * x then serves as the Lyapunov function, and the condition reduces to all eigenvalues of A having negative real parts.
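The linear Lyapunov equation can be solved numerically; a sketch with SciPy for an assumed stable A (note that SciPy's solver handles A*X + X*A^H = Q, so the arguments are transposed and negated to match A^T*P + P*A = -Q):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])    # Hurwitz: eigenvalues -1 and -2
Q = np.eye(2)

# SciPy solves a X + X a^H = q, so pass a = A^T, q = -Q to get A^T P + P A = -Q
P = solve_continuous_lyapunov(A.T, -Q)

print("P =\n", P)
print("eigenvalues of P:", np.linalg.eigvalsh(P))   # all positive -> P > 0
```

Since A is Hurwitz and Q is positive definite, the solution P is positive definite and V(x) = x^T P x is a valid Lyapunov function.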
Input-to-State Stability (ISS)
Input-to-State Stability extends Lyapunov stability to systems with external inputs: a system x-dot = f(x, u) is ISS if there exist a class-KL function beta and a class-K function gamma such that ||x(t)|| ≤ beta(||x(0)||, t) + gamma(sup||u||). This guarantees that bounded inputs produce bounded states and that the unforced system is asymptotically stable — making ISS the nonlinear generalization of BIBO stability.
5. Routh-Hurwitz Criterion
The Routh-Hurwitz criterion determines the number of closed-loop poles in the right half-plane from the characteristic polynomial coefficients, without computing the roots directly. Given the polynomial (illustrated here for fourth order; the construction generalizes to any degree): P(s) = a_4*s^4 + a_3*s^3 + a_2*s^2 + a_1*s + a_0.
Constructing the Routh Array
Row s^4:  a_4   a_2   a_0
Row s^3:  a_3   a_1   0
Row s^2:  b_1   b_2   0
Row s^1:  c_1   0     0
Row s^0:  d_1   0     0

where:
  b_1 = (a_3*a_2 - a_4*a_1) / a_3
  b_2 = (a_3*a_0 - a_4*0) / a_3 = a_0
  c_1 = (b_1*a_1 - a_3*b_2) / b_1
  d_1 = b_2 (last nonzero entry)
- Necessary condition: all coefficients a_i must be positive (or all negative)
- Number of sign changes in the first column = number of RHP poles
- System is stable iff all first-column entries are positive
- If a first-column entry is zero (but the row is not all zeros): replace with a small epsilon > 0 and take the limit
- If an entire row is zero: use the auxiliary polynomial (derivative of row above) to fill the row; roots of auxiliary polynomial are symmetric about the origin
Worked Example
Determine stability of P(s) = s^3 + 6*s^2 + 11*s + 6.
Row s^3:  1    11
Row s^2:  6    6
Row s^1:  b_1  0,   where b_1 = (6*11 - 1*6)/6 = (66 - 6)/6 = 10
Row s^0:  c_1,      where c_1 = (10*6 - 6*0)/10 = 6

First column: 1, 6, 10, 6 — all positive, zero sign changes.
Conclusion: all 3 poles in LHP, system is stable. (Poles are s = -1, -2, -3 — verifiable by factoring.)
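The array construction is easy to mechanize for the regular case (no zero first-column entries and no all-zero rows); a minimal sketch that reproduces the worked example:

```python
def routh_first_column(coeffs):
    """First column of the Routh array for a polynomial given by its
    coefficients in descending powers. Handles only the regular case:
    no zero first-column entries and no all-zero rows."""
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    while len(rows[1]) < len(rows[0]):
        rows[1].append(0.0)                 # pad so both rows align
    n = len(coeffs) - 1                     # polynomial degree -> n+1 rows
    while len(rows) < n + 1:
        top, bot = rows[-2], rows[-1]
        new = [(bot[0] * top[j + 1] - top[0] * bot[j + 1]) / bot[0]
               for j in range(len(top) - 1)]
        new.append(0.0)
        rows.append(new)
    return [row[0] for row in rows]

col = routh_first_column([1, 6, 11, 6])
changes = sum(1 for a, b in zip(col, col[1:]) if a * b < 0)
print("first column:", col, "-> RHP poles:", changes)
```

The epsilon and auxiliary-polynomial special cases would need extra handling beyond this sketch.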
6. Root Locus Method
The root locus is the set of all locations in the s-plane where the closed-loop poles can lie as the gain parameter K varies from 0 to +∞. For a unity feedback system with open-loop transfer function K*G(s), the closed-loop characteristic equation is 1 + K*G(s) = 0, or equivalently K*G(s) = -1.
Construction Rules (Evans Rules)
- Number of branches: equal to max(n, m), the larger of the number of open-loop poles and finite zeros; for proper systems this is n, one branch per pole
- Starting points (K=0): branches begin at open-loop poles
- Ending points (K→∞): branches end at open-loop zeros (finite zeros) or go to infinity along asymptotes
- Real axis segments: a segment of the real axis belongs to the locus if the total number of poles and zeros to its right is odd
- Asymptote angles: theta_k = (2k+1)*180° / (n - m), for k = 0, 1, ..., (n-m-1), where n = number of poles, m = number of zeros
- Asymptote centroid: sigma_A = (sum of poles - sum of zeros) / (n - m)
- Breakaway/break-in points: solutions of dK/ds = 0 that lie on the root locus
- Imaginary-axis crossings: determined by Routh-Hurwitz or by substituting s = j*omega into the characteristic equation
The root locus is a graphical design tool: by choosing the gain K at a desired closed-loop pole location, the designer directly controls the transient response characteristics (damping ratio, natural frequency). Adding poles or zeros to G(s) (via compensators) reshapes the locus to guide closed-loop poles to desirable regions.
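Since each point of the locus is a root of the characteristic polynomial, sweeping K through a polynomial root finder traces the locus numerically; a sketch for the assumed open loop G(s) = 1/(s(s+1)(s+2)):

```python
import numpy as np

# Root locus of 1 + K*G(s) = 0 for G(s) = 1 / (s (s + 1)(s + 2)):
# the characteristic polynomial is s^3 + 3 s^2 + 2 s + K.
for K in (1.0, 5.0, 6.0, 7.0):
    poles = np.roots([1.0, 3.0, 2.0, K])
    print(f"K = {K}: poles = {np.round(poles, 3)}")
# Routh-Hurwitz predicts the imaginary-axis crossing at K = 6 (poles at +/- j*sqrt(2))
```

Below K = 6 all three closed-loop poles sit in the LHP; above it the complex pair has crossed into the RHP, in agreement with the asymptote and crossing rules above.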
7. Bode Plots and Frequency Response
A Bode plot displays the frequency response G(j*omega) on semi-logarithmic axes: the magnitude |G(j*omega)| in decibels versus log(omega), and the phase angle angle(G(j*omega)) in degrees versus log(omega). Asymptotic approximations make Bode plots tractable by hand for rational transfer functions.
Asymptotic Bode Rules
| Factor | Magnitude Slope | Phase Contribution |
|---|---|---|
| K (gain) | flat at 20 log|K| dB | 0° (K>0) or ±180° (K<0) |
| (j*omega)^N (integrators) | -20N dB/decade | -90N° (constant) |
| (1 + j*omega/omega_z) zero | 0 below omega_z, +20 above | +45° at omega_z, +90° total |
| 1/(1 + j*omega/omega_p) pole | 0 below omega_p, -20 above | -45° at omega_p, -90° total |
| Second-order resonant pole | -40 dB/decade above omega_n | -90° at omega_n, -180° total |
Gain and Phase Margins
Gain margin (GM) is the factor by which the loop gain can increase before instability, measured at the phase crossover frequency (where the phase reaches -180°); phase margin (PM) is the additional phase lag that would cause instability, measured at the gain crossover frequency (where |G| = 0 dB). Aim for GM > 6 dB and PM > 45°. Higher phase margin gives more damping but lower bandwidth. A phase margin of 60° corresponds to zeta ≈ 0.6 and roughly 10% overshoot. Systems with PM near 0° exhibit large resonant peaks and poor transient behavior.
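Both margins can be read off a dense frequency-response grid; a numerical sketch (not a toolbox routine) for the assumed loop L(s) = 1/(s(s+1)(s+2)):

```python
import numpy as np

# Numerical gain/phase margins for an assumed loop L(s) = 1 / (s (s + 1)(s + 2)),
# read off a dense logarithmic frequency grid.
w = np.logspace(-2, 2, 20001)
L = 1.0 / (1j * w * (1j * w + 1.0) * (1j * w + 2.0))
mag, phase = np.abs(L), np.unwrap(np.angle(L))       # phase in radians

i = np.argmin(np.abs(phase + np.pi))                 # phase crossover (-180 deg)
gm = 1.0 / mag[i]                                    # gain margin (absolute)
j = np.argmin(np.abs(mag - 1.0))                     # gain crossover (0 dB)
pm = np.degrees(phase[j] + np.pi)                    # phase margin in degrees

print(f"GM = {gm:.2f} ({20 * np.log10(gm):.1f} dB), PM = {pm:.1f} deg")
```

For this loop the exact gain margin is 6 (about 15.6 dB, phase crossover at omega = sqrt(2) rad/s) and the phase margin is roughly 53°.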
8. Nyquist Stability Criterion
The Nyquist criterion uses Cauchy's argument principle: if a closed contour in the s-plane is mapped through a complex function F(s), the number of clockwise encirclements of the origin equals Z - P, where Z is the number of zeros and P the number of poles of F(s) inside the contour.
For closed-loop stability analysis, let F(s) = 1 + G(s)H(s) (the characteristic equation). The Nyquist contour encircles the entire right half-plane. Encirclements of the origin by F(s) correspond to encirclements of the critical point (-1, 0) by G(s)H(s). Closed-loop stability requires Z = 0, so the plot of G(j*omega)H(j*omega) must encircle (-1, 0) counterclockwise exactly P times, where P is the number of open-loop RHP poles.
Nyquist Plot for Dead-Time Systems
A pure time delay e^(-T*s) in the loop introduces a frequency-dependent phase shift of -omega*T radians. In the Nyquist plot this causes the locus to spiral, potentially creating multiple crossings of the negative real axis. Bode analysis can miss instability when the phase crosses -180° multiple times; the Nyquist criterion handles these cases correctly by counting all encirclements.
The Nyquist criterion also underpins the concept of gain and phase margins in a geometrically intuitive way: gain margin is the reciprocal of the magnitude of G(j*omega_pc)*H(j*omega_pc) — how far the Nyquist plot is from the critical point along the negative real axis — while phase margin is the angle between the Nyquist plot at unity gain and the critical point.
9. PID Controllers
PID Control Law
The PID controller acts on the error e(t) = r(t) - y(t) with three parallel terms:
u(t) = Kp*e(t) + Ki * integral of e(tau) dtau + Kd * de(t)/dt
Role of Each Term
- Proportional (Kp): immediate reaction to the current error. Increases speed of response and reduces (but does not eliminate) steady-state error. Too large → oscillation and instability.
- Integral (Ki): accumulates past error, guaranteeing zero steady-state error for constant references. Adds a pole at the origin (increases system type). Can cause integrator windup — addressed with anti-windup schemes.
- Derivative (Kd): reacts to the rate of change of error, providing predictive damping. Reduces overshoot and improves phase margin. Amplifies high-frequency noise — often paired with a derivative filter N: Kd*s / (1 + s/N).
Ziegler-Nichols Tuning (Closed-Loop Method)
The closed-loop (ultimate gain) method involves increasing Kp with Ki = 0, Kd = 0 until the system reaches sustained oscillations. The critical gain K_u and oscillation period T_u are measured, then the PID parameters are set using the Ziegler-Nichols table:
| Controller Type | Kp | Ti | Td |
|---|---|---|---|
| P | 0.5 * K_u | ∞ | 0 |
| PI | 0.45 * K_u | T_u / 1.2 | 0 |
| PID | 0.6 * K_u | T_u / 2 | T_u / 8 |
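Translating the table into parallel-form gains is mechanical; a sketch (the Ku and Tu values below are placeholders for quantities you would measure on the actual loop):

```python
def zn_pid(Ku, Tu):
    """Ziegler-Nichols closed-loop PID gains (parallel form
    u = Kp*e + Ki*integral(e) + Kd*de/dt) from the ultimate gain and period."""
    Kp = 0.6 * Ku
    Ti = Tu / 2.0
    Td = Tu / 8.0
    return Kp, Kp / Ti, Kp * Td          # Kp, Ki, Kd

# Placeholder measurements for illustration
Kp, Ki, Kd = zn_pid(Ku=10.0, Tu=2.0)
print(f"Kp = {Kp}, Ki = {Ki}, Kd = {Kd}")
```

Note the table's Ti and Td are time constants in the standard form Kp*(1 + 1/(Ti*s) + Td*s); the conversion Ki = Kp/Ti, Kd = Kp*Td maps them to parallel-form gains.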
Ziegler-Nichols Open-Loop (Reaction Curve) Method
Apply a step input to the open-loop plant and observe the output. Fit a first-order plus dead-time (FOPDT) model: the process gain K_p = delta_y / delta_u, the dead time L (lag before the response begins), and the time constant T (time to reach 63.2% of the final value after the lag). Then use:

| Controller Type | Kp | Ti | Td |
|---|---|---|---|
| P | T / (K_p * L) | ∞ | 0 |
| PI | 0.9 * T / (K_p * L) | L / 0.3 | 0 |
| PID | 1.2 * T / (K_p * L) | 2 * L | 0.5 * L |
Anti-Windup
Integrator windup occurs when the actuator saturates: the integral term continues accumulating even though the output cannot increase further, producing a large overshoot once saturation ends. Anti-windup schemes include clamping (stop integration when saturated), back-calculation (subtract a proportional fraction of the saturation error from the integrator), and conditional integration (integrate only when not saturated).
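A clamping scheme is only a few lines in a control loop; a toy sketch with an assumed first-order plant y' = -y + u and illustrative gains, setpoint, and saturation limit:

```python
# Clamping anti-windup in a PI loop on an assumed first-order plant y' = -y + u
# (gains, setpoint, and limits are illustrative, not from the text).
def simulate(anti_windup, r=1.5, u_max=2.0, Kp=2.0, Ki=1.0, dt=0.01, T=20.0):
    y, integ = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = r - y
        u = Kp * e + Ki * integ
        u_sat = max(-u_max, min(u_max, u))
        if not (anti_windup and u != u_sat):
            integ += e * dt              # clamping: freeze integrator while saturated
        y += (-y + u_sat) * dt           # forward-Euler plant update
    return y

print("with anti-windup:   ", simulate(True))
print("without anti-windup:", simulate(False))
```

Both runs settle to the setpoint here, but the un-clamped integrator accumulates excess charge during the initial saturation and produces the larger transient overshoot.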
10. State-Space Representation
The state-space description expresses an n-th order system as n coupled first-order equations: x-dot(t) = A*x(t) + B*u(t), y(t) = C*x(t) + D*u(t), where x is the n×1 state vector, A the n×n system matrix, B the input matrix, C the output matrix, and D the direct feedthrough term.
Solution and Matrix Exponential
The state trajectory is x(t) = e^(A*t) * x(0) + integral from 0 to t of e^(A*(t-tau)) * B * u(tau) dtau, where the matrix exponential e^(A*t) = I + A*t + (A*t)^2/2! + ... is the state transition matrix. The eigenvalues of A are the poles of the system.
Controllability
A system (A, B) is controllable if it is possible to drive the state from any initial condition to any desired state in finite time using an appropriate input. The controllability matrix is C_M = [B  A*B  A^2*B  ...  A^(n-1)*B], and the system is controllable if and only if rank(C_M) = n.
Observability
A system (A, C) is observable if the initial state x(0) can be uniquely determined from the output y(t) over a finite time interval. The observability matrix is O_M = [C; C*A; C*A^2; ...; C*A^(n-1)] (rows stacked), and the system is observable if and only if rank(O_M) = n.
Controllability and observability are dual concepts related by the PBH (Popov-Belevitch-Hautus) tests. A minimal realization — one with no uncontrollable or unobservable modes — is the smallest-order transfer function representation of a system. Kalman's decomposition theorem states that any system can be decomposed into four subsystems based on controllability and observability, with only the controllable-and-observable subsystem appearing in the transfer function.
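The rank tests are direct to implement; a sketch using NumPy, with the double integrator as an assumed example:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, A B, ..., A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix [C; C A; ...; C A^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Double integrator: controllable from the force input, observable from position
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
print("rank ctrb:", np.linalg.matrix_rank(ctrb(A, B)))
print("rank obsv:", np.linalg.matrix_rank(obsv(A, C)))
```

For well-conditioned low-order systems the rank test is reliable; for large or stiff systems, gramian-based tests are numerically preferable.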
11. State Feedback and Pole Placement
If the full state vector x(t) is measurable, apply the control law u(t) = -K*x(t) + N*r(t), where K is the state feedback gain vector (1×n for SISO) and N is a reference prefilter. The closed-loop system becomes x-dot = (A - B*K)*x + B*N*r, whose poles are the eigenvalues of A - B*K; if (A, B) is controllable, K can place these eigenvalues at any desired (conjugate-symmetric) locations.
Ackermann's Formula
For SISO systems, the feedback gain vector K is computed by Ackermann's formula: K = [0 0 ... 0 1] * C_M^(-1) * alpha_c(A), where C_M is the controllability matrix and alpha_c(s) is the desired closed-loop characteristic polynomial evaluated at the matrix A.
Observer (Luenberger Observer)
When the full state is not directly measurable, a Luenberger observer (state estimator) reconstructs the state from measurements: x-hat-dot = A*x-hat + B*u + L*(y - C*x-hat). The estimation error e = x - x-hat obeys e-dot = (A - L*C)*e, so the observer gain L is chosen to place the eigenvalues of A - L*C, typically 2 to 5 times faster than the controller poles.
The separation principle states that, for linear systems, the state feedback gain K and the observer gain L can be designed independently. Combining observer-based state estimation with state feedback produces the output feedback controller, also known as a dynamic compensator or observer-based compensator, which has order equal to the plant order n.
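Both gains can be computed with SciPy's pole-placement routine, using duality for the observer; a sketch on an assumed double-integrator plant (pole locations are illustrative):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # double integrator (assumed plant)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = place_poles(A, B, [-2.0, -3.0]).gain_matrix           # controller gain
# Observer gain by duality: place eigenvalues of (A - L C) via (A^T, C^T)
L_gain = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T

print("eig(A - B K):", np.linalg.eigvals(A - B @ K))
print("eig(A - L C):", np.linalg.eigvals(A - L_gain @ C))
```

The observer poles at -8 and -9 are roughly 3 times faster than the controller poles, consistent with the usual design guideline.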
12. LQR — Linear Quadratic Regulator
The Linear Quadratic Regulator finds the optimal state feedback gain K that minimizes a quadratic cost function balancing state regulation and control effort: J = integral from 0 to ∞ of (x^T*Q*x + u^T*R*u) dt, with Q = Q^T ≥ 0 and R = R^T > 0. The optimal control is u = -K*x with K = R^(-1)*B^T*P, where P is the unique positive semi-definite solution of the algebraic Riccati equation A^T*P + P*A - P*B*R^(-1)*B^T*P + Q = 0.
Tuning the LQR
The matrices Q and R embody the designer's priorities. A common starting point is Bryson's rule: set Q = diag(1/x_i_max^2) and R = diag(1/u_j_max^2), normalizing by the maximum allowable values of each state and input. Increasing Q relative to R penalizes state deviations more, resulting in faster (more aggressive) control at the cost of larger control effort. Decreasing Q relative to R yields more sluggish but energy-efficient control.
The LQR solution guarantees stability: if (A, B) is stabilizable and (A, Q^(1/2)) is detectable, the Riccati equation has a unique positive semi-definite solution P and the resulting closed-loop matrix (A - B*K) is asymptotically stable. Furthermore, the LQR has guaranteed stability margins: infinite gain margin and at least 60° phase margin in each input channel — a property that generally does not hold when an observer is inserted (motivating the Linear Quadratic Gaussian (LQG) design).
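SciPy solves the algebraic Riccati equation directly; a sketch for the assumed double integrator with Q = I and R = 1, for which the optimal gain is known analytically to be K = [1, sqrt(3)]:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # double integrator (assumed plant)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                              # state penalty
R = np.array([[1.0]])                      # control penalty

P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)            # K = R^-1 B^T P
print("LQR gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

Scaling Q up relative to R moves the closed-loop eigenvalues further left, trading control effort for speed, exactly as described above.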
13. Kalman Filter
The Kalman filter is an optimal linear state estimator for stochastic systems. The plant model includes process noise w and measurement noise v: x-dot = A*x + B*u + w, y = C*x + v, where w and v are zero-mean white Gaussian noises with covariances Q_n and R_n respectively.
Discrete-Time Kalman Filter
In practice, Kalman filters run on digital processors in discrete time. The discrete system x[k+1] = A_d*x[k] + B_d*u[k] + w[k], y[k] = C*x[k] + v[k] (process noise covariance Q_n, measurement noise covariance R_n) is estimated by alternating two steps each sample:
- Predict: x-hat[k|k-1] = A_d*x-hat[k-1] + B_d*u[k-1]; P[k|k-1] = A_d*P[k-1]*A_d^T + Q_n
- Update: K_f = P[k|k-1]*C^T*(C*P[k|k-1]*C^T + R_n)^(-1); x-hat[k] = x-hat[k|k-1] + K_f*(y[k] - C*x-hat[k|k-1]); P[k] = (I - K_f*C)*P[k|k-1]
LQG — Linear Quadratic Gaussian Control
Combining the LQR state feedback gain K with a Kalman filter (the optimal observer) yields the LQG controller. The separation principle applies: K and K_f are designed independently by solving two Riccati equations (one for the regulator, one for the filter). The resulting output feedback controller u = -K * x-hat is optimal with respect to the combined quadratic cost under Gaussian disturbances. Note that LQG does NOT guarantee the robust stability margins of pure LQR — it can have arbitrarily small gain and phase margins, motivating LQG/LTR (Loop Transfer Recovery) designs.
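The predict/update cycle reduces to a few lines in the scalar case; a sketch estimating an assumed constant state from noisy measurements (all numeric values are illustrative):

```python
import numpy as np

# Scalar Kalman filter (A_d = 1, C = 1) estimating an assumed constant state
# from noisy measurements; a minimal predict/update sketch.
rng = np.random.default_rng(0)
x_true = 5.0
q, r = 1e-6, 1.0                  # process and measurement noise variances
x_hat, P = 0.0, 100.0             # initial estimate and its covariance

for _ in range(200):
    z = x_true + rng.normal(0.0, np.sqrt(r))   # noisy measurement
    P = P + q                                   # predict (state is constant)
    Kf = P / (P + r)                            # Kalman gain
    x_hat += Kf * (z - x_hat)                   # update with the innovation
    P *= (1.0 - Kf)                             # update covariance
```

The gain Kf shrinks as confidence grows: early measurements move the estimate strongly, later ones only nudge it, and the posterior covariance P falls toward r/N after N samples.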
14. Lead and Lag Compensators
Classical compensators are added to the forward path to shape the open-loop Bode plot and achieve desired gain and phase margins. They take the form of a real-axis pole-zero pair.
Lead Compensator
A lead compensator adds phase lead (improves phase margin), increases bandwidth, and speeds up the transient response. The zero is placed below the crossover frequency and the pole above it, so the compensator acts like a differentiator over the crossover region.
Lag Compensator
A lag compensator improves steady-state accuracy by increasing the low-frequency gain, but it adds phase lag near the crossover frequency (worsening phase margin). To minimize this penalty, place the zero and pole well below the desired crossover frequency (by a factor of 10 or more), so the phase lag is negligible at crossover.
Lead-Lag Compensator
A combined lead-lag compensator provides both the accuracy improvement of a lag section and the phase margin improvement of a lead section: C(s) = K_c * (s + z_1)(s + z_2) / ((s + p_1)(s + p_2)) with p_1 < z_1 (lag section) and z_2 < p_2 (lead section). This is the frequency-domain equivalent of a PID controller: the lag section provides integral-like low-frequency boost and the lead section provides derivative-like high-frequency lead.
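For a lead section with pole-zero ratio alpha = z/p < 1, the maximum phase lead is phi_max = arcsin((1 - alpha)/(1 + alpha)), attained at the geometric-mean frequency w_max = sqrt(z*p); a sketch tabulating this relationship (the alpha values are arbitrary):

```python
import math

def lead_max_phase(alpha):
    """Maximum phase lead (degrees) of a lead section with alpha = z/p < 1,
    attained at w_max = sqrt(z*p)."""
    return math.degrees(math.asin((1 - alpha) / (1 + alpha)))

for alpha in (0.5, 0.2, 0.1):
    print(f"alpha = {alpha}: max phase lead = {lead_max_phase(alpha):.1f} deg")
```

A single lead section tops out below 90°; designs needing more than about 60° of added phase typically cascade two lead sections rather than push alpha very small (which amplifies high-frequency noise).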
15. Cascade vs. Feedback Control
Cascade (Series) Control
Cascade control uses two nested feedback loops: an inner (secondary) loop and an outer (primary) loop. The outer loop sets the reference for the inner loop. The inner loop corrects disturbances that enter between the two loops much faster than the outer loop alone could respond.
r ──► [C_outer] ──► r_inner ──► [C_inner] ──► u ──► [Plant] ──► y
↑(-) ◄─────────── y_outer ──────────────────────────┘
  ↑(-) ◄── y_inner (inner measurement) ──────┘

Cascade control is effective when the inner variable responds faster than the outer, the disturbance primarily affects the inner loop, and the inner loop can be closed with a simple P or PI controller. Common example: temperature (outer) / heat flow (inner) control in a chemical reactor.
Feedback vs. Feedforward
Feedback control is reactive: it acts only after the error appears. Feedforward control is proactive: it measures the disturbance directly and compensates before the plant output is affected. The ideal feedforward compensator is F(s) = -G_d(s) / G(s), where G_d is the disturbance-to-output transfer function. Perfect feedforward cancellation requires an exact plant model and is only practical for measurable disturbances.
In practice, feedforward and feedback are combined: feedforward handles measurable disturbances rapidly and feedback corrects for modeling errors and unmeasured disturbances. The combined architecture provides both fast disturbance rejection and robustness.
16. Nonlinear Control Introduction
Phase Plane Analysis
For second-order nonlinear systems x-dot_1 = f_1(x_1, x_2), x-dot_2 = f_2(x_1, x_2), the phase plane plots trajectories in the (x_1, x_2) state space. The slope of a trajectory at any point is dx_2/dx_1 = f_2/f_1. Equilibrium points are where f_1 = f_2 = 0; their stability is determined by the eigenvalues of the Jacobian linearization at each equilibrium.
Typical phase-plane features include stable nodes and foci (non-oscillatory or spiral convergence), unstable nodes and foci, saddle points, centers (conservative oscillation), and limit cycles (isolated closed trajectories — stable or unstable). The Poincaré-Bendixson theorem states that a bounded trajectory in the plane that does not approach an equilibrium point must approach a closed (periodic) orbit.
Lyapunov Functions for Nonlinear Systems
For a nonlinear system x-dot = f(x), constructing a Lyapunov function V(x) provides global stability certificates without linearization. The energy interpretation guides construction: for mechanical systems, V = T + U (kinetic + potential energy) often works. For control design, control Lyapunov functions (CLF) allow systematic controller synthesis: a CLF is a V such that there always exists a control input u that makes V-dot negative.
Feedback Linearization
Feedback linearization (input-output linearization) uses a nonlinear coordinate transformation and state feedback to cancel the system nonlinearities exactly, producing a linear system in the new coordinates on which standard linear control design applies. For the system y^(n) = f(x) + g(x)*u, choose u = (v - f(x)) / g(x) to obtain y^(n) = v, a pure integrator chain. Then design a linear controller for v to place closed-loop poles.
Feedback linearization requires an accurate model (sensitive to uncertainty), requires g(x) ≠ 0 (relative degree condition), and may leave internal dynamics (zero dynamics) that must be independently stable (minimum phase requirement). When these conditions hold, it provides an exact global linearization rather than a local approximation.
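The exact cancellation can be checked numerically on an assumed pendulum model, theta'' = -(g/l) sin(theta) + u/(m l^2) (parameters and gains are illustrative, not from the text):

```python
import math

# Feedback linearization of an assumed pendulum theta'' = -(g/l) sin(theta) + u/(m l^2):
# u = m l^2 * (v + (g/l) sin(theta)) cancels the nonlinearity, leaving theta'' = v,
# then v = -k1*theta - k2*theta' places the linear poles.
g, l, m = 9.81, 1.0, 1.0
k1, k2 = 4.0, 4.0                     # v-loop poles at s = -2 (double pole)

theta, omega, dt = 1.0, 0.0, 0.001
for _ in range(10000):                # 10 s of forward-Euler integration
    v = -k1 * theta - k2 * omega
    u = m * l**2 * (v + (g / l) * math.sin(theta))
    accel = -(g / l) * math.sin(theta) + u / (m * l**2)   # equals v exactly
    theta += omega * dt
    omega += accel * dt

print(f"theta after 10 s: {theta:.2e}")
```

The closed loop behaves as the linear system theta'' = -4*theta - 4*theta' even from the large initial angle theta(0) = 1 rad, which a small-angle linearization would not capture.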
17. Discrete-Time Control and the Z-Transform
Z-Transform Basics
The Z-transform is the discrete-time analog of the Laplace transform. For a causal sequence x[k]: X(z) = sum from k=0 to ∞ of x[k]*z^(-k). Multiplication by z^(-1) corresponds to a delay of one sample.
Mapping s-Plane to z-Plane
Sampling at period T maps the s-plane to the z-plane via z = e^(s*T). Key correspondences:
| s-Plane Region | z-Plane Region |
|---|---|
| Left half-plane (stable) | Inside unit circle, \|z\| < 1 |
| Imaginary axis (j*omega) | Unit circle, \|z\| = 1 |
| Right half-plane (unstable) | Outside unit circle, \|z\| > 1 |
| Origin s = 0 | z = 1 |
| s = -1/T (real pole) | z = e^(-1) ≈ 0.368 |
Discrete-Time State-Space
Sampling the continuous system with a zero-order hold at period T gives x[k+1] = A_d*x[k] + B_d*u[k], y[k] = C*x[k] + D*u[k], with A_d = e^(A*T) and B_d = (integral from 0 to T of e^(A*tau) dtau) * B.
Digital PID Implementation
Converting a continuous PID to discrete time using the Tustin (bilinear) approximation s ≈ 2*(z-1)/(T*(z+1)) preserves frequency-domain characteristics better than forward Euler (s ≈ (z-1)/T) or backward Euler. A common discrete PID difference equation (velocity form, with a backward-difference derivative) is: u[k] = u[k-1] + Kp*(e[k] - e[k-1]) + Ki*T*e[k] + (Kd/T)*(e[k] - 2*e[k-1] + e[k-2]).
Sampling Considerations
The sampling period T must satisfy the Nyquist sampling theorem relative to the closed-loop bandwidth: T < 1/(2*f_BW). In practice, sample at 10 to 20 times the bandwidth frequency to minimize discretization effects. Very fast sampling increases sensitivity to quantization noise; too slow sampling introduces phase lag that degrades stability margins. The rule of thumb for digital control: the sampling frequency omega_s = 2*pi/T should be at least 20 times the desired closed-loop bandwidth.
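Discretization itself is a single SciPy call; a sketch comparing zero-order hold and Tustin for an assumed plant G(s) = 1/(s+1):

```python
import numpy as np
from scipy.signal import cont2discrete

# Discretize an assumed plant G(s) = 1/(s + 1) at T = 0.1 s
num, den, T = [1.0], [1.0, 1.0], 0.1
for method in ("zoh", "bilinear"):
    nd, dd, _ = cont2discrete((num, den), T, method=method)
    print(method, np.round(np.ravel(nd), 4), np.round(np.ravel(dd), 4))
# The ZOH pole is exactly e^(-T) ~= 0.9048, matching the map z = e^(s*T) at s = -1
```

At this sampling rate (10 times the plant's corner frequency) both methods place the discrete pole within about 0.01% of each other, illustrating why either is acceptable with adequate sampling.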
18. Frequently Asked Questions
What is a transfer function in control theory?
A transfer function G(s) is the ratio of the Laplace transform of the output Y(s) to the Laplace transform of the input U(s), assuming zero initial conditions. It is a rational function of the complex frequency variable s, expressed as a ratio of polynomials N(s)/D(s). The roots of N(s) are zeros; the roots of D(s) are poles. Transfer functions fully characterize LTI systems and enable frequency-domain stability and performance analysis.
What is the difference between open-loop and closed-loop control?
Open-loop control computes the control action from the reference alone with no measurement of the output — disturbances go uncorrected. Closed-loop control measures the output, computes the error e = r - y, and drives the plant to reduce that error. Closed-loop systems reject disturbances, tolerate plant variations, and can track references with small steady-state error, but may become unstable if gain or phase margins are violated.
What is the Routh-Hurwitz stability criterion?
The Routh-Hurwitz criterion determines how many roots of a characteristic polynomial lie in the right half-plane without computing the roots explicitly. Arrange polynomial coefficients into a Routh array; the number of sign changes in the first column equals the number of unstable (RHP) roots. The system is stable if and only if all first-column entries are positive. Special cases (zero entries, all-zero rows) require special handling with auxiliary polynomials.
What are gain margin and phase margin and why do they matter?
Gain margin is the factor by which the open-loop gain can be increased before the closed-loop becomes unstable, measured in dB at the phase crossover frequency (where phase = -180 degrees). Phase margin is the additional phase lag that would bring the system to instability, measured at the gain crossover frequency (where gain = 0 dB). Design targets of GM > 6 dB and PM > 45 degrees ensure well-damped, robust behavior against gain variations and modeling uncertainty.
How does a PID controller work and what does each term do?
A PID controller outputs u(t) = Kp*e + Ki*integral(e) + Kd*de/dt. The proportional term provides immediate reaction to the current error; the integral term accumulates past error to eliminate steady-state offset; the derivative term anticipates future error from its rate of change to damp oscillations. Ziegler-Nichols and other tuning methods systematically determine Kp, Ki, Kd to balance speed, overshoot, and robustness.
What is the state-space representation and why is it used?
State-space describes a system as x-dot = Ax + Bu, y = Cx + Du — first-order matrix ODEs. Unlike scalar transfer functions, state-space handles MIMO systems naturally, enables simulation by numerical integration, and supports modern methods: pole placement, LQR optimization, Kalman filtering. The eigenvalues of A are the poles; controllability and observability matrices determine whether each mode is accessible to the input/output.
What is the Kalman filter and when is it used?
The Kalman filter is an optimal recursive estimator that minimizes mean-squared state estimation error for linear systems with Gaussian process and measurement noise. It alternates between a predict step (propagate state estimate and covariance forward using the model) and an update step (correct using the difference between the actual measurement and the predicted measurement, weighted by the Kalman gain). It is used in GPS navigation, inertial navigation, robotics, and any control system requiring state estimation from noisy sensors.
What is the Nyquist stability criterion?
The Nyquist criterion applies Cauchy's argument principle to determine closed-loop stability from the open-loop frequency response. Plot G(j*omega)*H(j*omega) for omega from -infinity to +infinity (the Nyquist plot). The number of clockwise encirclements N of the critical point (-1, 0) equals Z - P, where Z is the number of closed-loop RHP poles and P is the number of open-loop RHP poles. For stable open loop (P=0), the system is stable iff the Nyquist plot does not encircle (-1, 0).