|
Ron Hirschorn
Time: 9:10 - 10
Title: Control Variations under Feedback
Abstract: Control variations can be implemented as sampled-data feedback controllers with the variations generated during the sample intervals. This approach can simplify nonlinear controller design and achieve asymptotic or practical stabilization.
|
Dmitry Voytsekhovsky
Time: 10-10:30
Title: Chronological calculus and its application in solving stabilization problems for single-input nonlinear systems
Abstract: In this talk we give a brief overview of chronological calculus, a newly developed branch of applied mathematics which finds applications in nonlinear dynamical systems theory and control. The calculus reflects the most general properties of flows and is based on the exponential representation of flows defined by nonstationary vector fields. Employing techniques developed in chronological calculus, we introduce a piecewise constant control algorithm for generating extra control directions corresponding to Lie brackets of the drift and control vector fields of a system. Then we use the new directions to stabilize the system around its equilibrium state. The control scheme might be useful for a class of systems having uncontrollable linearization. The effect of so-called "bad brackets" is also addressed.
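As a rough illustration of how piecewise constant switching generates bracket directions (a standard flow identity, stated up to sign conventions for the bracket, and not the specific construction of the talk): if $\Phi^f_t$ denotes the flow of a vector field $f$, then
$\Phi^{-g}_t \circ \Phi^{-f}_t \circ \Phi^{g}_t \circ \Phi^{f}_t (x) = x + t^2\,[f,g](x) + O(t^3)$,
so concatenating short forward and backward flows along $f$ and $g$ produces net motion in the direction of the Lie bracket $[f,g]$, a direction not directly available from $f$ and $g$ alone.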
|
Velimir Jurdjevic
Time: 10:45-11:35
Title: Optimal Control Problems on Lie Groups
Click here for the abstract.
|
Julie Vale
Time: 11:35-12:05
Title: The Robust Performance and Stabilisation Problem
Abstract: In this talk we will discuss a control approach for systems with the following structure: a known LTI plant with an unknown (structured) time-varying gain at the input. The control objective is that of tracking; the class of signals to be tracked is (roughly) modelled by a stable filter at the input.
If our plant were purely LTI and known, then we could use classical control design techniques (e.g. LQR, lead-lag, PID, etc.) to design an LTI controller (which produces what we will call the ideal control signal) to achieve our tracking goal. Now, if the time-varying gain were known, then the ideal control signal divided by that gain would achieve the same result. Unfortunately, we do not know the gain, so instead we will apply an estimate. This estimation will take place in two stages: first, we approximate the gain with a polynomial (this approximation takes place offline), and then we use estimates of that polynomial to approximate the desired control signal. The resulting controller can be expressed as a nonlinear controller or as a linear periodic controller.
In this talk we will outline the concept behind this control approach as well as discuss the details of the estimation for this case of uncertainty.
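Schematically (with notation introduced here for illustration, not taken from the talk): if the plant receives the input $g(t)\,u(t)$, where $g(t)$ is the unknown gain, and $u_{\mathrm{ideal}}(t)$ is the control signal the LTI design would apply to the known plant, then the choice
$u(t) = u_{\mathrm{ideal}}(t) / \hat g(t)$
delivers $\big(g(t)/\hat g(t)\big)\,u_{\mathrm{ideal}}(t)$ to the plant, which recovers the ideal behaviour to the extent that the estimate $\hat g(t)$ tracks $g(t)$.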
|
Kirsten Morris
Time: 13:30-14:20
Title: Feedback Invariance for Infinite-Dimensional Systems
Abstract: A subspace V is feedback invariant if for all initial conditions in V there exists a feedback control that keeps the state in V for all time. The zero dynamics are the dynamics of the controlled system on the largest invariant subspace in the kernel of the observation operator, C. For a linear finite-dimensional system, the control can be a constant state feedback K, and the eigenvalues of the zero dynamics are the invariant zeros of the original system.
Many questions remain in the theory of zeros for infinite-dimensional systems. For instance, let (A,B,C) indicate a state-space realization. The largest invariant subspace in the kernel of C might not exist. Even if such a subspace exists, if the feedback K is not bounded, the zero dynamics may not be well-posed. Results on feedback by unbounded operators enable us to obtain a characterization of when systems with finite relative degree have a largest feedback invariant subspace in the kernel of C.
An application of this work is disturbance decoupling. A disturbance may be decoupled from the output if and only if it lies inside an invariant subspace of the kernel of C. The research on feedback invariance can be used to obtain feedback operators that provide disturbance decoupling.
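For reference, the finite-dimensional picture alluded to above can be written as follows (standard geometric control; the infinite-dimensional subtleties are exactly what the talk addresses): for $\dot x = Ax + Bu$, $y = Cx$, a subspace $V$ is feedback invariant if there is a feedback $u = Kx$ with $(A + BK)V \subseteq V$. Taking $V^*$ to be the largest such subspace contained in $\ker C$, the zero dynamics are $\dot x = (A + BK)x$ restricted to $V^*$, and, as stated above, the eigenvalues of this restriction are the invariant zeros of $(A,B,C)$.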
|
Zhiyun Lin
Time: 14:20-15:10
Title: On the state agreement problem of coupled nonlinear systems with time-varying interactions
Abstract: In this talk, we consider the state agreement problem with the objective to ensure the asymptotic coincidence of all states of multiple nonlinear dynamical systems. A general interconnection of nonlinear subsystems is treated, where the vector fields can switch within a finite family. Associated to each vector field is a directed graph based in a natural way on the interaction structure of the subsystems. Generalizing work of Moreau, under the assumption that the vector fields satisfy a certain sub-tangentiality condition, we show that asymptotic state agreement can be achieved if and only if the dynamic interaction digraph has the property of being sufficiently connected over time. Applications of the main result are then made to the synchronization of coupled Kuramoto oscillators with time-varying interaction and to the synthesis of a rendezvous controller for a multi-agent system.
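For the Kuramoto application mentioned above, the coupled oscillators with time-varying interaction take the standard form (written here for orientation; the precise coupling model used in the talk may differ):
$\dot\theta_i = \omega_i + \sum_{j} a_{ij}(t)\,\sin(\theta_j - \theta_i), \quad i = 1, \dots, n$,
where $\theta_i$ is the phase of oscillator $i$, $\omega_i$ its natural frequency, and the gains $a_{ij}(t) \ge 0$ define the time-varying interaction digraph.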
|
Daniel Miller
Time: 15:40-16:30
Title: The Delay Margin Problem
Abstract: Handling delays in control systems is difficult and is of long-standing interest. It is well known that, given a strictly causal finite dimensional linear time-invariant (LTI) plant and a finite dimensional LTI stabilizing controller, closed-loop stability will be maintained under a small delay in the feedback loop, i.e. there is a non-zero "delay margin". However, in some situations there is a large delay, perhaps arising from a slow communications link or a large but variable computational delay (e.g., in a system which uses image processing). While there are techniques available to design a controller to handle a known delay, there is no general theory for designing a controller to handle a large uncertain delay. Here we start with an LTI plant, with a control objective of closed loop stability, and consider the following question: is there a limit to the amount of uncertain delay that can be tolerated when using a linear time-invariant, linear time-varying, and nonlinear controller, respectively?
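In symbols (a standard way to phrase the quantity; notation introduced here): if the controller $C$ stabilizes the plant $P$, its delay margin is
$\mathrm{DM}(P,C) = \sup\{\tau \ge 0 : C \text{ stabilizes } e^{-s\bar\tau}P \text{ for all } \bar\tau \in [0,\tau]\}$,
and the question above asks whether $\sup_C \mathrm{DM}(P,C)$ is finite when the supremum is taken over LTI, linear time-varying, and nonlinear controllers respectively.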
|
Elsa Hansen
Time: 16:30-17
Title: Designing a discontinuous stabilizing feedback for a class of controllable systems
Abstract: The problems of controllability and stabilizability are fundamental to control theory. The relationship between the two ideas has proven to be, at best, indirect. For instance, controllability from a point does not imply smooth stabilizability to that same point, and a smoothly stabilizable system need not be controllable. These results suggest that some ground might be gained by considering more general types of stabilizing feedback. For instance, one might study the relationship between controllability and stabilizability via discontinuous feedback.
As a first step towards understanding this more general connection, we look at how the geometry of a certain class of systems, controllable at second order, can be used to design a discontinuous stabilizing feedback. A key feature of this work is the deviation from traditional Lyapunov methods and a focus on using the system's underlying geometry to design a stabilizing feedback.
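A classical illustration of the gap between controllability and continuous stabilizability (not necessarily in the precise class treated in the talk) is the nonholonomic integrator
$\dot x_1 = u_1, \quad \dot x_2 = u_2, \quad \dot x_3 = x_1 u_2 - x_2 u_1$,
which is controllable but, by Brockett's necessary condition, admits no continuous time-invariant stabilizing feedback at the origin; discontinuous (or time-varying) feedback is the standard remedy.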
|
Martin Guay
Time: 17-17:50
Title: Nonlinear Model Predictive Control: Real-time methods and Robustness
Abstract: Model predictive control (MPC) methods based upon the online minimization of an optimal control problem have gained significant interest in the last few decades, in particular amongst the process industries, where process dynamics are relatively slow with respect to computational time requirements. Computational requirements constitute the primary limitation for application of current MPC methods, both within the process sector for systems of large size and complexity, as well as for applications in other sectors such as robotics or aerospace, where systems tend to have faster dynamics. In this presentation, we propose a new formulation of continuous-time nonlinear model predictive control (NMPC) in which the parameters defining the input trajectory are adapted continuously in real time. Continuous implementation of the control as the input parameterization is being optimized reduces the impact of computational delay, in particular in response to process disturbances. Within the context of this new technique, we analyze the nominal robustness of MPC and provide some tools to enhance its robust performance. The application of adaptive MPC is briefly discussed.
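A rough sketch of the idea (the parameterization and update law below are illustrative assumptions, not the precise scheme of the talk): the input over the horizon is parameterized as $u(\tau) = \phi(\tau,\theta)$ (for example, piecewise polynomial in $\tau$ with coefficient vector $\theta$), the cost is
$J(\theta, x(t)) = \int_t^{t+T} \ell(x(\tau), \phi(\tau,\theta))\,d\tau + W(x(t+T))$,
and rather than solving $\min_\theta J$ to completion at each sampling instant, $\theta$ is updated continuously, e.g. by a gradient-type law $\dot\theta = -k\,\nabla_\theta J$, while the current value $\phi(t,\theta(t))$ is applied to the plant.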
|
|
|
|
Andrew Lewis
Time: 9 - 9:50
Title: Energy Shaping
Abstract: Stability of mechanical systems with kinetic minus potential energy Lagrangians is a classical subject. In the best cases, stability (and more) can be directly inferred from the properties of the potential function. For this reason, the idea of using feedback to shape the closed-loop energy of a mechanical system is an attractive one. If one wishes to shape only the potential energy, then it is possible to understand the set of closed-loop potential functions using the classical Frobenius Theorem; this has been known since the work of van der Schaft in 1986. However, the set of systems that can be stabilised using energy shaping is enlarged if one also allows the shaping of the closed-loop kinetic energy. Unfortunately (or fortunately, if you like geometry), this complicates the problem enormously, leading to overdetermined quasi-linear partial differential equations for the closed-loop kinetic energy.
In this talk we will discuss some of the many open problems in this idea of energy shaping. We will also indicate what few results exist concerning the integrability of the energy shaping partial differential equations.
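To fix ideas (standard background, not a result of the talk): for a mechanical system with kinetic energy metric $M(q)$ and potential $V(q)$, energy shaping seeks a feedback under which the closed-loop system is again mechanical, with energy
$E_{\mathrm{cl}}(q,\dot q) = \tfrac12\,\dot q^{T} M_{\mathrm{cl}}(q)\,\dot q + V_{\mathrm{cl}}(q)$,
so that stability of an equilibrium can be read off from a strict minimum of $V_{\mathrm{cl}}$ (with $M_{\mathrm{cl}}$ positive definite). Potential shaping fixes $M_{\mathrm{cl}} = M$ and modifies only $V_{\mathrm{cl}}$, which leads to the Frobenius-type integrability question mentioned above; allowing $M_{\mathrm{cl}} \neq M$ is what produces the overdetermined quasi-linear PDEs for the closed-loop kinetic energy.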
|
David Tyner
Time: 9:50-10:20
Title: Jacobian linearisation in a geometric setting
Abstract: In this talk, Jacobian linearisation along nontrivial reference trajectories is presented from a differential geometric perspective. We shall see that the standard techniques that apply to control affine systems on R^n do not transfer directly to a geometric theory. Indeed, replacing R^n with a differentiable manifold poses several problems if one wishes not to work in a specific coordinate chart.
To build a coordinate-invariant framework for Jacobian linearisation we employ the abstract setting of "affine systems". In this framework we define the geometric linearisation and characterise controllability by giving an alternate version of the usual controllability test for time-varying linear systems. Also, stability and stabilisation by quadratic optimal control will be formulated in this setting.
The motivation for this is not to broaden the applicability of linearisation techniques, but to better understand the structure of linearisation.
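For orientation, the coordinate version referred to above is the following (standard, written for a control affine system on $R^n$): given $\dot x = f(x) + \sum_{a=1}^m u^a g_a(x)$ and a reference trajectory $(x_{\mathrm{ref}}(t), u_{\mathrm{ref}}(t))$, the Jacobian linearisation is the time-varying linear system
$\dot v = A(t)v + B(t)w, \quad A(t) = Df(x_{\mathrm{ref}}(t)) + \sum_a u^a_{\mathrm{ref}}(t)\,Dg_a(x_{\mathrm{ref}}(t)), \quad B(t) = [\,g_1(x_{\mathrm{ref}}(t)) \ \cdots \ g_m(x_{\mathrm{ref}}(t))\,]$.
The talk concerns how to make sense of this construction, and the associated controllability and stabilisation theory, when $R^n$ is replaced by a manifold.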
|
Peter Caines
Time: 10:40-11:30
Title: Optimal Control of Hybrid Systems: Theory and Algorithms
Abstract: A standard class of hybrid control systems $H$ consists of a set of differential control systems parameterized by a finite state set denoted $Q$ and possessing the state space $Q \times M$, where $M$ is a differentiable manifold. Discrete switchings between the controlled vector fields occur subject to a discrete control input or when the continuous component of the state enters specific codimension-one submanifolds. We present a general result of the Hybrid Minimum Principle (HMP) type for $H$. Based upon the HMP, the class of computationally efficient, so-called HMP algorithms is constructed for the case where the sequence of discrete states along an optimal trajectory is specified a priori. Then, using the notion of Optimality Zones (OZs), the class of HMPOZ algorithms is constructed, which compute optimal hybrid trajectories without constraints on the location sequence and which consequently search over all feasible sequences. Subject to an initial computational investment independent of the number $L$ of switchings between locations, the computational complexity of the HMPOZ algorithms is linear in $L$.
Work with Shahid Shaikh.
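In outline (standard Pontryagin-type structure; the precise boundary and switching conditions are part of the result presented): within a location $q \in Q$ one forms the Hamiltonian
$H_q(x,\lambda,u) = \lambda^{T} f_q(x,u) + \ell_q(x,u)$,
and along an optimal hybrid trajectory the state and adjoint satisfy $\dot x = \partial H_q/\partial\lambda$, $\dot\lambda = -\partial H_q/\partial x$ with $u$ minimizing $H_q$ pointwise, while at switching times the adjoint $\lambda$ satisfies additional jump (transversality) conditions determined by the switching manifolds and any switching costs.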
|
Chris Nielsen
Time: 11:30-12
Title: Transverse Feedback Linearization for Multi-Input Systems
Abstract: In this talk I will introduce the transverse feedback linearization problem for multi-input control affine dynamical systems. This presentation continues the theme of my talk at the previous meeting in 2004. After introducing the problem with an example, I will outline our complete characterization of the local solution, the obstacles to obtaining global solutions, and the relationship between transverse feedback linearization and conventional feedback linearization. The talk will finish with a pair of examples.
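Roughly speaking (an informal statement of the problem, not of the talk's results): given a controlled-invariant submanifold $\Gamma$ of the state space, transverse feedback linearization asks for local coordinates $(\eta,\xi)$ with $\Gamma = \{\xi = 0\}$ and a feedback under which the transverse dynamics take the linear controllable form
$\dot\xi = A\xi + Bv$;
a stabilizing feedback for this transverse subsystem then locally stabilizes the set $\Gamma$ for the original system.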
|
Mireille Broucke
Time: 13:30-14:20
Title: A Viability Problem for Control Affine Systems
Abstract: In this talk I will explore the relationship between the theory of viability kernels as it is typically formulated for differential inclusions and the corresponding theory for control affine systems. The approach is to apply to control affine systems the characterization of the viability kernel obtained by the so-called Frankowska method [Aubin 2001]. Under reasonable conditions on the control affine system, this yields a surprisingly simple formulation of a viability controller and an analytical - rather than numerical - characterization of the viability kernel. I will highlight the importance of nonsmooth analysis tools to arrive at a rigorous formulation of the solution. Finally, using this theory, I will summarize a solution to the problem of "least restrictive" collision avoidance control of two unicycles.
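For reference, the object in question (standard definition, written here for a differential inclusion): given $\dot x \in F(x)$ and a closed set $K$, the viability kernel is
$\mathrm{Viab}_F(K) = \{x_0 \in K : \exists \text{ a solution } x(\cdot) \text{ with } x(0) = x_0 \text{ and } x(t) \in K \text{ for all } t \ge 0\}$;
for a control affine system $\dot x = f(x) + g(x)u$, $u \in U$, one takes $F(x) = \{f(x) + g(x)u : u \in U\}$, and the talk concerns how the general characterization specializes in this case.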
|
Abdol-Reza Mansouri
Time: 14:20-15:10
Title: Stabilization to a submanifold
Abstract: We consider the problem of asymptotic stabilization to a submanifold of R^n using a continuous feedback law; this is an immediate generalization of the problem of asymptotic stabilization to a point. We present a necessary condition for the existence of such a feedback law, which reduces to Coron's necessary condition (and, by implication, Brockett's) when the submanifold is a point in R^n. We show relations between this necessary condition and the topology of the submanifold, and illustrate it on a few examples.
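For the point case referred to above, Brockett's necessary condition can be stated as follows (standard form): if $\dot x = f(x,u)$ with $f(x_0,0) = 0$ is asymptotically stabilizable to $x_0$ by a continuous feedback, then the image of $(x,u) \mapsto f(x,u)$, for $(x,u)$ ranging over any neighbourhood of $(x_0,0)$, contains a neighbourhood of $0$ in $R^n$.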
|
Brian Ingalls
Time: 15:10-16:00
Title: Control Theoretic Approaches in Systems Biology
Abstract: As molecular biology continues to reveal the ever-increasing complexity of cellular mechanisms, it becomes clear that the standard reductionist approach cannot succeed unaided. System-wide descriptions show that cellular networks are dominated by regulatory interactions at the biochemical and genetic levels. These regulatory networks show analogy to the automatic feedback devices designed by control engineers. It is natural, then, to make use of the theory developed for the design of such man-made systems to aid in the reverse-engineering of cellular interactions.
This talk will begin by introducing the mechanisms of regulation of cellular networks. The potential contribution of control-theoretic analysis will be illustrated by considering case studies from metabolic, signal transduction, and genetic networks.
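As one generic illustration of the analogy (a textbook-style model, not necessarily one of the case studies in the talk): a negatively autoregulated gene can be modelled as
$\frac{dx}{dt} = \frac{\beta}{1 + (x/K)^{n}} - \gamma x$,
where $x$ is the protein concentration, $\beta$ the maximal production rate, $K$ and $n$ the repression threshold and Hill coefficient, and $\gamma$ the degradation rate; the Hill term acts as a negative feedback of precisely the kind control theory is equipped to analyse.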
|