Description:
Extremum seeking, a popular tool in control applications in the 1940s-1960s, returned in the 1990s as an exciting research topic and industrial real-time optimization tool. Extremum seeking is also a method of adaptive control, but it does not fit into the classical paradigm of model reference and related schemes, which deal with the problem of stabilization of a known reference trajectory or set point.
A second distinction between classical adaptive control and extremum seeking is that the latter is not model based. As such, it provides a rigorous, high-performance alternative to control methods involving neural networks. Its non-model-based character explains the resurgence in popularity of extremum seeking over the past half decade: the recent applications in fluid flow, combustion, and biomedical systems are all characterized by complex, unreliable models.
Extremum seeking is applicable in situations where there is a nonlinearity in the control problem, and the nonlinearity has a local minimum or maximum. The nonlinearity may be in the plant, as a physical nonlinearity, possibly manifesting itself through an equilibrium map, or it may be in the control objective, added to the system through the cost functional of an optimization problem. Hence, one can use extremum seeking either for tuning a set point to achieve an optimal value of the output or for tuning the parameters of a feedback law. The parameter space can be multivariable, a case we cover extensively in this book.
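As a rough illustration of the set-point tuning case, the sketch below runs a standard perturbation-based extremum seeking loop on a static map; the quadratic plant_map, its optimum at 2.0, and all gains and filter constants are illustrative assumptions rather than values taken from the book.

```python
import numpy as np

# Hypothetical static map with its maximum at theta = 2.0.  The extremum
# seeking loop only uses measured outputs y, never this expression.
def plant_map(theta):
    return 5.0 - 0.8 * (theta - 2.0) ** 2

dt = 1e-3        # Euler integration step [s]
T = 40.0         # simulation horizon [s]
a = 0.2          # perturbation amplitude (illustrative)
omega = 5.0      # perturbation frequency [rad/s]
k = 20.0         # adaptation gain (illustrative)
omega_h = 1.0    # washout (high-pass) filter cutoff [rad/s]

theta_hat = 0.0  # initial set-point estimate, far from the optimum
eta = 0.0        # washout filter state

for i in range(int(T / dt)):
    t = i * dt
    theta = theta_hat + a * np.sin(omega * t)   # probe the map
    y = plant_map(theta)                        # measured output only
    # first-order washout filter strips the slowly varying part of y
    eta += dt * omega_h * (y - eta)
    y_hp = y - eta
    # demodulate and integrate: on average this follows the gradient of the map
    theta_hat += dt * k * a * np.sin(omega * t) * y_hp

print(f"theta_hat after {T:.0f} s: {theta_hat:.3f} (true optimum at 2.0)")
```

A sinusoidal probe, a washout filter, and a demodulating integrator are the usual ingredients: the averaged update moves in the direction of the unknown map's gradient, so theta_hat drifts toward the maximizer without any model of plant_map.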
This book surveys the efforts made over the last seven years to put extremum seeking on a rigorous analytical footing and to make performance improvement in extremum seeking schemes systematic. The stability guidelines that have been developed are applicable not only to static maps but also to systems that combine static maps with dynamics in virtually any form, with the single restriction that the dynamics be open-loop stable. The main accomplishment of the recent period, to which this book is dedicated, is achieving convergence to the optimum on a time scale comparable to the time scale of the plant dynamics. In other words, one does not have to try one set of parameters, wait for the plant transient to settle, try another set of parameters, wait again, compare the results, try again, and so on. The convergence of the parameters (set points, gains, etc.) occurs over a period comparable to the length of the plant transients.