Lyapunov stability
Lyapunov stability analysis applies to unforced (no control input) dynamical systems. It is used to study the behaviour of a dynamical system under initial perturbations around an equilibrium point.
Let us consider that the origin is an equilibrium point (EP) of the system.
A system is said to be stable about the equilibrium point "in the sense of Lyapunov" if, for every ε > 0, there exists a δ > 0 (depending on ε) such that:
- <math>\|x(t_0)\| < \delta \quad \implies \quad \|x(t)\| < \epsilon \quad \forall t \in R^{+}</math>
The system is said to be asymptotically stable if it is stable in the sense of Lyapunov and, in addition,
- <math>\|x(t)\| \rightarrow 0 \quad \mbox{as} \quad t \rightarrow \infty</math> (the state converges to the EP).
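The definition can be illustrated numerically. Below is a minimal sketch (not part of the original article) that simulates a damped harmonic oscillator, an arbitrary choice of asymptotically stable system; the matrix, initial state and time horizon are all illustrative assumptions.

```python
# Minimal sketch: simulate a damped harmonic oscillator (an illustrative
# choice of asymptotically stable system) and observe the defining behaviour:
# a trajectory starting near the origin stays near it and decays to it.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])       # x'' + x' + x = 0 in state-space form

def f(t, x):
    return A @ x

x0 = np.array([0.1, 0.0])          # ||x(t_0)|| < delta
sol = solve_ivp(f, (0.0, 50.0), x0, max_step=0.1)
norms = np.linalg.norm(sol.y, axis=0)

print("max ||x(t)|| :", norms.max())    # remains small (< epsilon)
print("final ||x(t)||:", norms[-1])     # tends to 0: asymptotic stability
```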
Lyapunov stability theorems
Lyapunov stability theorems give only sufficient conditions for stability.
Lyapunov second theorem on stability
Consider a function <math>V(x) : R^{n} \rightarrow R</math> such that
- <math>V(x) > 0 \quad \forall x \neq 0</math> and <math>V(0) = 0</math> (positive definite)
- <math>\dot{V}(x) < 0 \quad \forall x \neq 0</math> (negative definite)
Then V(x) is called a Lyapunov function and the system is asymptotically stable in the sense of Lyapunov.
It is easier to visualise this method of analysis by thinking of a physical system (e.g. a vibrating spring and mass) and considering the energy of such a system. If the system loses energy over time and the energy is never restored, then eventually the system must grind to a stop and reach some final resting state. This final state is called the attractor. However, finding a function that gives the precise energy of a physical system can be difficult, and for abstract mathematical systems, economic systems or biological systems, the concept of energy may not be applicable.
Lyapunov's realisation was that stability can be proven without requiring knowledge of the true physical energy, provided a Lyapunov function can be found that satisfies the above constraints.
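As a concrete illustration, the conditions of the theorem can be checked symbolically for a simple system. The sketch below uses a toy example of my own choosing (not from the article), verifying that <math>V(x) = x^2</math> works for the scalar system <math>\dot{x} = -x^3</math>.

```python
import sympy as sp

x = sp.symbols('x', real=True)
xdot = -x**3                # toy dynamics: xdot = -x^3 (assumed example)
V = x**2                    # candidate: V(0) = 0 and V > 0 for x != 0

# derivative of V along trajectories, by the chain rule
Vdot = sp.diff(V, x) * xdot
print(sp.simplify(Vdot))    # -2*x**4, negative for every x != 0
```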
Stability for state space models
A state space model <math>\dot{\textbf{x}} = A\textbf{x}</math> is asymptotically stable if
- <math>A^{T}M + MA + N = 0</math>
has, for a given <math>N = N^{T} > 0</math>, a solution <math>M = M^{T} > 0</math> (both matrices positive definite). (The relevant Lyapunov function is <math>V(x) = x^TMx</math>.)
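In practice the Lyapunov equation is solved numerically. A hedged sketch using SciPy's solve_continuous_lyapunov, with an example matrix A chosen here purely for illustration:

```python
# Sketch: pick N = I, solve A^T M + M A = -N for M, and check M > 0.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # eigenvalues -1 and -2: Hurwitz
N = np.eye(2)

# SciPy solves  a X + X a^H = q,  so pass a = A^T and q = -N
M = solve_continuous_lyapunov(A.T, -N)

print(M)
print(np.all(np.linalg.eigvalsh(M) > 0))   # True: M is positive definite
```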
Example
Consider the Van der Pol oscillator equation:
<math> \ddot{y} + y -\epsilon \left( \frac{\dot{y}^{3}}{3} - \dot{y}\right) = 0 </math>
Let <math> x_{1} = y , \dot{x_{1}} = x_{2} </math> so that the corresponding system is
<math> \dot{x}_{1} = x_{2} , \quad \dot{x}_{2} = -x_{1} + \epsilon \left( \frac{x_{2}^{3}}{3} - x_{2}\right) </math>
Let us choose as a Lyapunov function
<math> V = \frac {1}{2} \left(x_{1}^{2}+x_{2}^{2} \right) </math>
which is clearly positive definite. Its derivative is
<math> \dot{V} = x_{1} \dot{x}_{1} + x_{2} \dot{x}_{2} = x_{1} x_{2} - x_{1} x_{2} + \epsilon \left(\frac{x_{2}^{4}}{3} - x_{2}^{2}\right) = -\epsilon \left(x_{2}^{2} - \frac{x_{2}^{4}}{3}\right)</math>
If the parameter <math> \epsilon </math> is positive, <math>\dot{V}</math> is negative wherever <math> 0 < x_{2}^{2} < 3 </math>, so stability is asymptotic near the origin. (On the line <math>x_{2} = 0</math> we have <math>\dot{V} = 0</math>, so <math>\dot{V}</math> is only negative semi-definite; strictly, the asymptotic conclusion uses the invariant-set argument discussed below.)
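The claim can be checked numerically. The following sketch (with an arbitrary choice of <math>\epsilon</math> and initial state, both assumptions of mine) integrates the system and confirms that V never increases along the trajectory:

```python
# Sketch: integrate the system above and confirm V = (x1^2 + x2^2)/2
# decreases along the trajectory while x2^2 < 3.
import numpy as np
from scipy.integrate import solve_ivp

eps = 1.0                                 # illustrative parameter choice

def f(t, x):
    x1, x2 = x
    return [x2, -x1 + eps * (x2**3 / 3.0 - x2)]

sol = solve_ivp(f, (0.0, 30.0), [1.0, 0.5], max_step=0.01)
V = 0.5 * (sol.y[0]**2 + sol.y[1]**2)

print(np.diff(V).max())   # <= 0 up to integration error: V never increases
print(V[0], V[-1])        # the "energy" decays toward zero
```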
Barbalat's lemma and stability of time-varying systems
Assume that f is a function of time only.
- <math>\dot{f}(t) \to 0</math> does not imply that <math>f(t)</math> has a limit as <math>t\to\infty</math>.
- <math>f(t)</math> having a limit as <math>t \to \infty</math> does not imply that <math>\dot{f}(t) \to 0</math>.
- If <math>f(t)</math> is lower bounded and decreasing (<math>\dot{f}\le 0</math>), then it converges to a limit. But this alone does not say whether or not <math>\dot{f}\to 0</math> as <math>t \to \infty</math>.
Barbalat's lemma says that if <math>f(t)</math> has a finite limit as <math>t \to \infty</math> and <math>\dot{f}</math> is uniformly continuous (for which it suffices that <math>\ddot{f}</math> is bounded), then <math>\dot{f}(t) \to 0</math> as <math>t \to\infty</math>.
But why do we need Barbalat's lemma?
Usually, it is difficult to analyze the *asymptotic* stability of time-varying systems, because it is hard to find Lyapunov functions with a *negative definite* derivative.
Why is that a problem? We have invariant-set theorems for the case when <math>\dot{V}</math> is only negative semi-definite (NSD).
True: for autonomous (time-invariant) systems, if <math>\dot{V}</math> is negative semi-definite, the asymptotic behaviour can still be determined by invoking invariant-set theorems.
But this flexibility is not available for *time-varying* systems.
This is where Barbalat's lemma comes into the picture. It says:
If <math>V(x,t)</math> satisfies the following conditions:
(1) <math>V(x,t)</math> is lower bounded
(2) <math>\dot{V}(x,t)</math> is negative semi-definite (NSD)
(3) <math>\dot{V}(x,t)</math> is uniformly continuous in time (it suffices that <math>\ddot{V}</math> is bounded)
then <math>\dot{V}(x,t)\to 0</math> as <math>t \to \infty</math>.
But how does it help in determining asymptotic stability?
There is a nice example on page 127 of Slotine and Li's book *Applied Nonlinear Control*.
Consider the non-autonomous system
<math>\dot{e}=-e + g\cdot w(t)</math>
<math>\dot{g}=-e \cdot w(t)</math>
This is non-autonomous because the input w is a function of time. Let's assume that the input w(t) is bounded.
If we take <math>V=e^2+g^2</math>, then <math>\dot{V}=2e\dot{e}+2g\dot{g}=2e(-e+g\cdot w)-2eg\cdot w=-2e^2 \le 0</math>
By the first two conditions of the lemma, V is lower bounded and non-increasing, so <math>V(t)\le V(0)</math> and hence e and g are bounded. But this says nothing about the convergence of e to zero. Moreover, we cannot apply the invariant set theorem, because the dynamics are non-autonomous.
Now let's use Barbalat's lemma:
<math>\ddot{V}= -4e\dot{e} = -4e(-e+g\cdot w)</math>. This is bounded, because e, g and w are bounded. Hence <math>\dot{V}</math> is uniformly continuous, and by Barbalat's lemma <math>\dot{V} \to 0</math> as <math>t\to\infty</math>, which means <math>e \to 0</math>. If we are interested in error convergence, then our problem is solved.
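A simulation sketch of this example; the bounded input w(t) = sin(t) and the initial conditions are choices made here for illustration, not taken from the book.

```python
# Sketch: simulate e' = -e + g*w(t), g' = -e*w(t) with a bounded input
# and observe that e and g stay bounded while e converges to zero.
import numpy as np
from scipy.integrate import solve_ivp

def w(t):
    return np.sin(t)        # any bounded input works for the argument

def f(t, x):
    e, g = x
    return [-e + g * w(t), -e * w(t)]

sol = solve_ivp(f, (0.0, 200.0), [1.0, 1.0], max_step=0.05)
e, g = sol.y

print(abs(e[-1]))                        # small: e(t) -> 0, as predicted
print(np.abs(e).max(), np.abs(g).max())  # both trajectories stay bounded
```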