Singular control
In optimal control, problems of singular control are problems that are difficult to solve because a straightforward application of Pontryagin's minimum principle fails to yield a complete solution. Only a few such problems have been solved, such as Merton's portfolio problem in financial economics or trajectory optimization in aeronautics. A more technical explanation follows.

The most common difficulty in applying Pontryagin's principle arises when the Hamiltonian depends linearly on the control $u$, i.e., is of the form

$$H(x, \lambda, u, t) = \varphi(x, \lambda, t)\, u + \cdots$$

and the control is restricted to lie between an upper and a lower bound: $a \le u(t) \le b$. To minimize $H$, we need to make $u$ as big or as small as possible, depending on the sign of $\varphi$, specifically:

$$u(t) = \begin{cases} a, & \varphi(x, \lambda, t) > 0 \\ b, & \varphi(x, \lambda, t) < 0. \end{cases}$$

If $\varphi$ is positive at some times, negative at others, and zero only instantaneously, then the solution is straightforward and is a bang-bang control that switches from $b$ to $a$ at the times when $\varphi$ switches from negative to positive.
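The pointwise minimization above can be sketched in code. This is a minimal illustration, not from the original article; the function name and the `tol` threshold for deciding that $\varphi$ is "zero" are my own choices:

```python
def pointwise_control(phi, a, b, tol=1e-9):
    """Pointwise minimizer of a Hamiltonian H = phi*u + (terms without u),
    subject to a <= u <= b.  Returns None when phi is numerically zero,
    i.e. when the minimum principle gives no information (singular case)."""
    if phi > tol:       # coefficient positive: push u to its lower bound
        return a
    if phi < -tol:      # coefficient negative: push u to its upper bound
        return b
    return None         # phi == 0: candidate singular point
```

For example, with bounds $a = -1$, $b = 1$, a positive $\varphi$ selects the lower bound: `pointwise_control(0.5, -1.0, 1.0)` returns `-1.0`, while `pointwise_control(0.0, -1.0, 1.0)` returns `None`, flagging the singular case discussed next.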

The case when $\varphi$ remains at zero for a finite length of time $t_1 \le t \le t_2$ is called the singular control case. Between $t_1$ and $t_2$ the maximization of the Hamiltonian with respect to $u$ gives us no useful information, and the solution in that time interval will have to be found from other considerations. (One approach is to repeatedly differentiate $\partial H/\partial u$ with respect to time until the control $u$ again appears explicitly, which is guaranteed to happen eventually. One can then set that expression to zero and solve for $u$. This amounts to saying that between $t_1$ and $t_2$ the control is determined by the requirement that the singularity condition continue to hold. The resulting so-called singular arc will be optimal if it satisfies the Kelley condition:

$$(-1)^k \frac{\partial}{\partial u} \left[ \frac{d^{2k}}{dt^{2k}} \frac{\partial H}{\partial u} \right] \ge 0, \qquad k = 0, 1, 2, \ldots$$

This condition is also called the generalized Legendre-Clebsch condition.)
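As an illustration of the repeated-differentiation procedure (a sketch I am adding, not part of the original article), consider the toy problem of minimizing $\int \tfrac{1}{2} x^2 \, dt$ subject to $\dot x = u$. The Hamiltonian $H = \tfrac{1}{2} x^2 + \lambda u$ is linear in $u$ with $\varphi = \lambda$, and differentiating $\partial H/\partial u$ twice along the state and costate dynamics makes $u$ reappear:

```python
import sympy as sp

x, lam, u = sp.symbols('x lam u')

# Toy problem: minimize (1/2)*integral(x^2 dt) subject to xdot = u.
H = sp.Rational(1, 2) * x**2 + lam * u      # Hamiltonian, linear in u
Hu = sp.diff(H, u)                          # phi = lam

xdot = sp.diff(H, lam)                      # state equation:   xdot = u
lamdot = -sp.diff(H, x)                     # costate equation: lamdot = -x

def ddt(expr):
    """Total time derivative of expr(x, lam) along the dynamics."""
    return sp.diff(expr, x) * xdot + sp.diff(expr, lam) * lamdot

d1 = ddt(Hu)     # d/dt (dH/du)     = -x   (u still absent)
d2 = ddt(d1)     # d^2/dt^2 (dH/du) = -u   (u appears: order k = 1)

u_singular = sp.solve(sp.Eq(d2, 0), u)[0]   # singular control: u = 0

# Kelley / generalized Legendre-Clebsch check for k = 1:
# (-1)^1 * d/du [ d^2/dt^2 (dH/du) ] = (-1)*(-1) = 1 >= 0,
# so the arc x = 0, u = 0 is a candidate optimal singular arc.
kelley = (-1)**1 * sp.diff(d2, u)
```

Here $u$ first appears after two time derivatives (order $k = 1$), the singular control is $u = 0$, and the Kelley quantity evaluates to $1 \ge 0$, so the singular arc passes the necessary condition.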

The term bang-singular control refers to a control that has a bang-bang portion as well as a singular portion.
The source of this article is Wikipedia, the free encyclopedia. The text of this article is licensed under the GFDL.
 