Helsinki University of Technology Control Engineering Espoo 2010
Report 167
WIRELESS CONTROL SYSTEM SIMULATION AND NETWORK ADAPTIVE CONTROL Mikael Björkbom
AALTO UNIVERSITY SCHOOL OF SCIENCE AND TECHNOLOGY DEPARTMENT OF AUTOMATION AND SYSTEMS TECHNOLOGY
Helsinki University of Technology Control Engineering Espoo October 2010
Report 167
WIRELESS CONTROL SYSTEM SIMULATION AND NETWORK ADAPTIVE CONTROL Mikael Björkbom Doctoral dissertation for the degree of Doctor of Science in Technology to be presented with due permission of the Faculty of Electronics, Communications and Automation for public examination and debate in Auditorium AS1 at the Aalto University School of Science and Technology (Espoo, Finland) on the 10th of December 2010 at 12 noon.
Aalto University School of Science and Technology Faculty of Electronics, Communications and Automation Department of Automation and Systems Technology
Distribution:
Aalto University
Department of Automation and Systems Technology
P.O. Box 15500
FI-00076 Aalto, Finland
Tel. +358-9-470 25201
Fax +358-9-470 25208
E-mail: [email protected]
http://autsys.tkk.fi/

ISBN 978-952-60-3460-7 (printed)
ISBN 978-952-60-3461-4 (pdf)
ISSN 0356-0872
URL: http://lib.tkk.fi/Diss/2010/isbn9789526034614
Aalto-Print Helsinki 2010
ABSTRACT OF DOCTORAL DISSERTATION

AALTO UNIVERSITY SCHOOL OF SCIENCE AND TECHNOLOGY
P.O. Box 11000, FI-00076 Aalto
http://www.aalto.fi

Author: Mikael Björkbom
Name of the dissertation: Wireless control system simulation and network adaptive control
Manuscript submitted: 27.5.2010
Manuscript revised: 7.10.2010
Date of the defence: 10.12.2010
Type: Monograph
Faculty: Faculty of Electronics, Communications and Automation
Department: Department of Automation and Systems Technology
Field of research: Control Engineering
Opponents: Prof. Matti Vilkko, Prof. Tapani Ristaniemi
Supervisor: Prof. Heikki Koivo

Abstract
With the arrival of the wireless automation standards WirelessHART and ISA100.11a, the use of wireless technology in the automation industry is emerging today. The main benefits of using wireless devices range from the absence of cables and lower installation costs to more flexible positioning. When next-generation agile wireless communication methods are used in control applications, the unreliability of the wireless network becomes an issue, due to the real-time requirements of control. Research has previously focused either on control design and stability for wired control, or on network protocols for wireless sensor networks. Only a marginal part of the research has studied wireless control.

This thesis takes a practical approach to the field of wireless control design. A simulation system called PiccSIM is developed, in which the communication and the control can be co-simulated and studied. Some simulation tools already exist, such as TrueTime, but none of them delivers capabilities as flexible and versatile as PiccSIM for the simulation of specific protocols and algorithms. PiccSIM is not only a simulation system: it consists of a tool-chain for network and control design, and further implementation on real wireless nodes. A variety of wireless control scenarios are simulated and studied. The effects of the network on the control performance are studied both theoretically and through simulations to gain insight into the communication and control interaction.

Typical control design approaches in the literature are of optimal-control type, with guaranteed stability given certain network-induced delays and packet losses. Such control design has been complicated and has resulted in complex controllers. This thesis concentrates on PID-type controllers, because of their simplicity and wide use in industry. To accommodate PID controllers to control over unreliable wireless networks, several adaptive schemes, which adapt to the network quality of service, are developed. This results in flexible, self-tuning control that can cope with non-deterministic and time-varying wireless networks. The proposed adaptive control algorithms are tested and verified in simulations using PiccSIM.

Keywords: wireless networked control systems, co-simulation, network adaptive control
ISBN (printed): 978-952-60-3460-7    ISSN (printed): 0356-0872
ISBN (pdf): 978-952-60-3461-4    ISSN (pdf):
Language: English
Number of pages: 173
Publisher: Aalto University, Department of Automation and Systems Technology
Print distribution: Aalto University, Department of Automation and Systems Technology
The dissertation can be read at http://lib.tkk.fi/Diss/2010/isbn9789526034614/
SUMMARY (ABSTRACT) OF DOCTORAL DISSERTATION

AALTO UNIVERSITY SCHOOL OF SCIENCE AND TECHNOLOGY
P.O. Box 11000, FI-00076 Aalto
http://www.aalto.fi

Author: Mikael Björkbom
Title: Simulation of wireless control systems and network adaptive control
Manuscript submitted: 27.5.2010
Manuscript revised: 7.10.2010
Date of the defence: 10.12.2010
Type: Monograph
Faculty: Faculty of Electronics, Communications and Automation
Department: Department of Automation and Systems Technology
Field of research: Systems Engineering
Opponents: Prof. Matti Vilkko, Prof. Tapani Ristaniemi
Supervisor: Prof. Heikki Koivo

Abstract
The use of wireless technology in the automation industry is now breaking through, thanks to the new standards for wireless automation: WirelessHART and ISA100.11a. The main advantages of using wireless devices are the absence of cables, with consequently lower installation costs, and increased flexibility. Using next-generation agile wireless networks in control applications brings problems, owing to the unreliability of the networks and the real-time performance that the control system requires.

Research in this area has previously focused either on control design and stability of wired control systems, or on network protocols for wireless sensor networks. A marginal part has studied wireless control. This thesis approaches the problems from a practical viewpoint. A simulation system called PiccSIM is developed, in which the wireless communication and the control can be simulated and studied together. A few similar simulators already exist, for example TrueTime, but none of them is as flexible and versatile as PiccSIM, in which specific protocols and algorithms can be simulated. PiccSIM is not only a simulator, but consists of several tools for the design of networks and control systems. Several wireless control systems are simulated and studied. The performance of the wireless networks and their effect on the control system are studied both theoretically and through simulations, in order to understand the interaction between the wireless network and the control system.

A typical approach in the literature is optimal control, in which the controller is designed according to given delay and packet-loss specifications. This results in a complex control design. This thesis concentrates on controllers of PID type, because they are simple and widely used in industry. To apply PID controllers over unreliable wireless networks, several adaptive control methods are developed, which adapt themselves to the network performance. The result is flexible, self-tuning controllers that work despite the non-deterministic wireless network. The developed adaptive control methods are tested and verified in simulations with PiccSIM.

Keywords: wireless control systems, co-simulation, network adaptive control
ISBN (printed): 978-952-60-3460-7    ISSN (printed): 0356-0872
ISBN (pdf): 978-952-60-3461-4    ISSN (pdf):
Language: English
Number of pages: 173
Publisher: Aalto University, Department of Automation and Systems Technology
Print distribution: Aalto University, Department of Automation and Systems Technology
The dissertation is available online at http://lib.tkk.fi/Diss/2010/isbn9789526034614/
Anyone who has a Master’s degree can become a Ph.D. – but the persistent drive to discover new knowledge is essential.
PREFACE

I started my research career at the former Control Engineering Laboratory at Helsinki University of Technology in 2003 as a summer trainee, with Prof. Heikki Koivo as my supervisor. The following summer I developed the MoCoNet platform, which was later extended into the PiccSIM platform. The MoCoNet platform became a part of my Master's thesis, which I finished in 2006. Since the Master's thesis, I have worked in the WiSA I and II projects (Wireless Sensor and Actuator Networks for Measurement and Control), to which the PiccSIM Toolchain is a major contribution. My Licentiate thesis on PiccSIM was a convenient stepping stone for this Doctoral thesis, as it is now a part of the foundation of this thesis.

My supervisor, Prof. Heikki Koivo, has given me academic freedom in my research work. I have, in other words, developed my adaptive control algorithms completely myself. In the implementation of PiccSIM I have collaborated with Shekar Nethi from the Department of Communications and Networking, who has assisted with the network simulation part. Tuomo Kohtamäki has, under my guidance, done the hard work of implementing the Toolchain interfaces, for which I am grateful. Sofia Piltz did her Master's thesis under my supervision on the step adaptive controller; I thank her for her hard and careful work. For the simulation case studies I have received invaluable input and assistance from Prof. Riku Jäntti, Shekar Nethi, and Lasse Eriksson. Lasse Eriksson has also thoroughly read the thesis and given some excellent suggestions to improve it. I am very grateful for the countless hours of bedtime reading he has done. William Martin has done the proofreading with tireless attention to detail and grammar improvements. I received the final comments from the pre-examiners, Associate Prof. Anton Cervin and Prof. Muhammed Elmusrati, of which the comments by Cervin were objective and insightful.

The funding of the WiSA I-II projects came from the Finnish Funding Agency for Technology and Innovation (TEKES), through the Nordite program. The research has been a collaboration between Nordic universities, in our case with Kungliga Tekniska Högskolan (KTH) in Stockholm, Sweden. I had the pleasure of visiting Mikael Johansson at KTH for one month in May 2009, with many shorter visits later on. The research launched during the visit has continued to be fruitful. I appreciate the graduate student position I received at the Graduate School in Electronics, Telecommunications and Automation (GETA) in 2007. It enabled the freedom to work solely on one's own subject, although
there has not been any situation where I have needed to exercise that freedom. The wireless measurements were done at the facilities of Konecranes; I thank D.Sc. Timo Sorsa for allowing us to visit their industrial halls.

I would additionally like to thank the Finnish Foundation for Technology Promotion, the Emil Aaltonen Foundation, The Finnish Foundation for Economic and Technology Sciences - KAUTE, Neles Oy:n 30-vuotissäätiö (The 30th Anniversary Foundation of Neles), the Walter Ahlström Foundation, and the Oskar Öflund Foundation for the support I have received. I have also received several travel grants to conferences from The Automation Foundation and GETA.

Finally, I thank my wife Susse for listening patiently to me when I try to explain, in a simple way, things that she does not understand. The marriage left its mark on the contributed papers, as my family name changed: Pohjola was exchanged for the index-unfriendly Björkbom.

Espoo, October 2010
Mikael Björkbom
TABLE OF CONTENTS

Preface .................................................................... v
Table of Contents .......................................................... vii
List of Publications by the Author ......................................... xi
List of Abbreviations ...................................................... xiii
List of Symbols ............................................................ xv
1. Introduction ............................................................ 1
1.1. Objectives of the Thesis .............................................. 3
1.2. Contributions and Organization of the Thesis ......................... 4
1.3. Background of Wireless Control ....................................... 6
1.4. Wireless Control Systems and Simulation .............................. 8
1.5. Research on Wireless Control Networks and Applications ............... 10
1.5.1. Wireless Networks for Control ...................................... 11
1.5.2. Current Standards for Wireless Automation .......................... 12
1.5.3. Wireless Sensor Networks ........................................... 14
2. Preliminaries – Networks and Controllers ............................... 17
2.1. The Networked Control Problem ........................................ 17
2.2. General Assumptions .................................................. 18
2.3. Networked Control Structures ......................................... 21
2.4. Network Models ....................................................... 24
2.4.1. Packet Drop - Delay Jitter ......................................... 26
2.4.2. Drop and Delay Models based on Markov-chains ....................... 28
2.5. Jitter Margin ........................................................ 31
2.6. The PID Controller in Networked Systems .............................. 32
2.6.1. Tuning of PID controllers in Networked Control Systems ............. 32
2.6.2. The PID PLUS Controller ............................................ 34
2.7. Internal Model Control ............................................... 35
2.7.1. Internal Model Control Design ...................................... 36
2.7.2. IMC-PID Controller Design .......................................... 37
2.8. Network Quality of Service in Networked Control Systems .............. 38
2.8.1. Network Performance Considerations ................................. 39
2.8.2. Network Congestion and Traffic Rate Control ........................ 40
2.9. Kalman Filtering in Networked Control Systems ........................ 42
2.10. Summary ............................................................. 44
3. Networks and Controllers in Practice ................................... 45
3.1. Measurements of Radio Environments ................................... 45
3.2. Estimated Gilbert-Elliott Models ..................................... 50
3.3. The Networked PID Controller ......................................... 51
3.4. Internal Model Control in Networked Control Systems .................. 53
3.4.1. Approximations of Closed-loop Step Response ........................ 53
3.4.2. IMC Control and Jitter Margin ...................................... 55
3.4.3. Sampling Interval and IMC Tuning for Jitter Margin ................. 57
3.5. Effect of Network Quality of Service on Control Performance .......... 59
3.5.1. Network Cost for Control ........................................... 60
3.5.2. Simulations for Network and Control Performance Relationship ...... 62
3.6. Summary .............................................................. 64
4. PiccSIM – Toolchain for Network and Control Co-Design and Simulation .. 67
4.1. Development of the Co-simulation Platform ............................ 68
4.2. Review of Networked Control System Simulators ........................ 69
4.3. PiccSIM Architecture ................................................. 75
4.3.1. Simulink and ns-2 Integration ...................................... 77
4.3.2. Data Exchange Between Simulators ................................... 78
4.3.3. Simulation Clock Synchronization ................................... 79
4.3.4. Other Implemented Features ......................................... 80
4.4. PiccSIM Toolchain .................................................... 82
4.4.1. PiccSIM Block Library .............................................. 83
4.4.2. Toolchain User Interfaces .......................................... 84
4.5. Remote User Interfaces ............................................... 88
4.6. Automatic Code Generation and Implementation ......................... 90
4.7. Simulation Case Studies .............................................. 91
4.7.1. Target Tracking Scenario ........................................... 92
4.7.2. Robot Squad with Formation Changes ................................. 95
4.7.3. Building Automation Scenario ....................................... 98
4.7.4. Crane Control in an Industrial Hall ................................ 102
4.7.5. PiccSIM Toolchain Demonstrations ................................... 105
4.8. Summary .............................................................. 109
5. Adaptive Control in Wireless Networked Control Systems ................. 111
5.1. Adaptive Jitter Margin PID Control ................................... 112
5.1.1. Delay Jitter Estimation Simulations ................................ 113
5.1.2. Adaptive Control Tuning Scenario Simulations ....................... 116
5.1.3. Summary ............................................................ 118
5.2. Adaptive Control Speed Based on Network Quality of Service .......... 119
5.2.1. The Adaptive Control Speed Scheme .................................. 120
5.2.2. Changing the Sampling Interval ..................................... 122
5.2.3. Analysis of the Adaptive Control Speed Algorithm ................... 124
5.2.4. Simulation Scenario ................................................ 126
5.2.5. Summary ............................................................ 129
5.3. Step Adaptive Controller for Networked MIMO Control Systems ......... 129
5.3.1. Controller Tuning by Optimization for MIMO Systems ................. 132
5.3.2. Step Adaptive Controller Tuning and Simulations .................... 133
5.3.3. Summary ............................................................ 137
5.4. Steady-State Outage Compensation Heuristic ........................... 138
5.4.1. The Steady-State Heuristic ......................................... 139
5.4.2. Stability of the Steady-State Heuristic ............................ 142
5.4.3. Simulations and Comparisons ........................................ 146
5.4.4. Summary ............................................................ 150
6. Conclusions ............................................................ 151
References ................................................................ 157
LIST OF PUBLICATIONS BY THE AUTHOR

Although this doctoral dissertation is a monograph, the results presented here are based on the following publications, presented at international conferences or published in journals.

[P1] Pohjola, M., L. Eriksson, V. Hölttä, and T. Oksanen, Platform for monitoring and controlling educational laboratory processes over Internet, in Proc. 16th IFAC World Congress, Prague, Czech Republic, 4-8 July, 2005.

[P2] Nethi, S., M. Pohjola, L. Eriksson, and R. Jäntti, Platform for emulating networked control systems in laboratory environments, in Proc. 8th International Symposium on a World of Wireless, Mobile and Multimedia Networks, Helsinki, Finland, 18-21 June, 2007.

[P3] Kohtamäki, T., M. Pohjola, J. Brand, and L.M. Eriksson, PiccSIM Toolchain – Design, simulation and automatic implementation of wireless networked control systems, in Proc. IEEE International Conference on Networking, Sensing and Control, Okayama, Japan, 26-29 March, 2009.

[P4] Nethi, S., M. Pohjola, L. Eriksson, and R. Jäntti, Simulation case studies of wireless networked control systems, in Proc. 10th ACM/IEEE International Symposium on Modelling, Analysis and Simulation of Wireless and Mobile Systems, Crete, Greece, 22-26 October, 2007.

[P5] Björkbom, M., S. Nethi, and R. Jäntti, Wireless control of multihop mobile robot squad, IEEE Wireless Communications, Special Issue on Wireless Communications in Networked Robotics, vol. 16, no. 1, February, 2009.

[P6] Björkbom, M., S. Nethi, L. Eriksson, and R. Jäntti, Wireless control system design and co-simulation, submitted.
[P7] Pohjola, M. and H. Koivo, Measurement delay estimation for Kalman filter in networked control systems, in Proc. 17th IFAC World Congress, Seoul, Korea, 6-11 July, 2008.

[P8] Pohjola, M., Adaptive jitter margin PID controller, in Proc. 4th IEEE Conference on Automation Science and Engineering, Washington D.C., USA, 23-26 August, 2008.

[P9] Pohjola, M., Adaptive control speed based on network quality of service, in Proc. 17th Mediterranean Conference on Control and Automation, Thessaloniki, Greece, 24-26 June, 2009.

[P10] Piltz, S., M. Björkbom, L.M. Eriksson, and H.N. Koivo, Step adaptive controller for networked MIMO control systems, in Proc. IEEE International Conference on Networking, Sensing and Control, Chicago, USA, 11-13 April, 2010.

[P11] Björkbom, M. and M. Johansson, Networked PID control: tuning and outage compensation, in Proc. 36th IEEE Industrial Electronics Conference, Glendale, AZ, USA, 7-10 November, 2010.
LIST OF ABBREVIATIONS

ACS  Adaptive Control Speed
AIMD  Additive Increase, Multiplicative Decrease
AJM  Adaptive Jitter Margin
ANSI  American National Standards Institute
AODV  Ad Hoc On-demand Distance Vector
CAN  Controller Area Network
COTS  Commercial Off The Shelf
CSMA  Carrier Sense Multiple Access
DCF  Distributed Coordination Function (MAC protocol for WLAN)
FDMA  Frequency Division Multiple Access
FOLIPD  First Order Lag Plus Integral Plus Delay
FOTD  First Order Time-Delay
G-E  Gilbert-Elliott
GUI  Graphical User Interface
HART  Highway Addressable Remote Transducer
HVAC  Heating, Ventilation and Air Conditioning
IAE  Integral of Absolute Error
IEC  International Electrotechnical Commission
IEEE  Institute of Electrical and Electronics Engineers
IMC  Internal Model Control
ISA  International Society of Automation
ISE  Integral of Square Error
ISM  Industrial, Scientific, and Medical (frequency band)
ITAE  Integral of Time weighted Absolute Error
ITSE  Integral of Time weighted Square Error
KF  Kalman Filter
LAN  Local Area Network
LMI  Linear Matrix Inequality
LMNR  Localized Multiple Next-hop Routing
MAC  Medium Access Control
MIMO  Multiple-Input Multiple-Output
MoCoNet  Monitoring and Controlling Educational Laboratory Processes over Internet
NCC  Network Cost for Control
NCS  Networked Control System
NS-2  Network Simulator version 2
OPNET  Optimized Network Engineering Tool
PiccSIM  Platform for Integrated Communications and Control design, Simulation, Implementation and Modeling
PID  Proportional-Integral-Derivative
RTE  Real-Time Ethernet
SAC  Step Adaptive Controller
SISO  Single-Input Single-Output
SSH  Steady-State Heuristic
TCL  Tool Command Language
TCP  Transmission Control Protocol
TDMA  Time Division Multiple Access
TLC  Target Language Compiler
TOSSIM  TinyOS Simulator
TSMP  Time Synchronized Mesh Protocol
UDP  User Datagram Protocol
WNCS  Wireless Networked Control System
WLAN  Wireless Local Area Network
WSAN  Wireless Sensor and Actuator Network
WSN  Wireless Sensor Network
QoS  Quality of Service
QPT  Quantitative Parameter Tuning
ZOH  Zero Order Hold
LIST OF SYMBOLS

α  Weighting factor
β  Filtering factor
γ  Time-constant of discrete-time filter
δ  Delay jitter
δmax  Jitter margin (maximum allowed delay jitter)
θ  Markov-chain jump parameter
λ  IMC tuning parameter, closed-loop system time-constant
π, πG, πB  Markov-chain steady-state probability distribution, Good and Bad state of Gilbert-Elliott model
σ, σD, σGE  Standard deviation, of data, of Gilbert-Elliott model
σnorm  Normalized standard deviation
σNCC  Network cost for control fairness measure
σtot  Total standard deviation, on several time-scales
τ  Process delay (without network induced delay)
ω  Angular velocity
Γc  Controller input matrix
Φc  Controller state-transition matrix
Χ  Stochastic process
a  Controller gain parameter
b  Set-point weighting
c  Update step scaling factor of adaptive control speed algorithm
cJ  Cost scaling factor
cv  Coefficient of variation
d  Delay of packet
d  Delay difference (jitter)
df  Time-constant of discrete-time derivative filter
dG, dB, dGE  Packet drop probabilities of Gilbert-Elliott model, Good state, Bad state, average
dmax  Maximum delay before control is switched to stop mode
e, eΣ, eΔ  Control error, integral of error, derivative of error
ehold  Error signal value held constant during network outage
f  Frequency
f(k)  Filter for PID PLUS
g  Time-constant of steady-state heuristic
h, hbase  Sampling interval, base sampling interval
i  Index
j  Imaginary unit or index
k  Discrete time-index
ks  Time-index for switching of controller
m  Time-scale
m(k)  Relative update speed of adaptive control speed scheme
maxcross  Maximum constraint for cross-interaction
n  Integer, order of IMC filter
pGG, pBB  State-holding probabilities for Gilbert-Elliott model
pGB, pBG  State-transition probabilities for Gilbert-Elliott model
pdrop  Packet drop probability
pij  Markov-chain state-transition probability
r, rd, rtot, rmeas  Packet drop, desired, total, and measured packet drop
Δr  Velocity of adaptive control speed algorithm
s  Laplace-transform variable
t, t(k)  Continuous time, discrete time-instant
tn  Time-instant
u, uhold, uol  Control signal, signal value held constant during network outage, and control signal of open-loop system
uD  Derivative part of control signal
x, xKF, xs, xc  Process, Kalman filter, sensor, controller state vector
y, yhold, yol, ys  Process output, signal value held constant during network outage, output during open-loop control, sensor output
yin, yout  Input and output signal of network
yr  Control reference signal
y  Difference in output
Δy  Change in process output
z  Process output measurement vector
A, B, C, D  State-space matrices: state-transition, input, output, and direct terms. Xc: controller, Xc,drop: controller during packet drop, Xp: process, Xs: sensor
Adrop  State-space transition matrix for whole system during packet drop
D  Vector of delays
D(z)  Denominator of discrete-time controller
Df  Time-constant of derivative filter
Dhist  Histogram of consecutive drop lengths
Dload  Load disturbance
G, Gp  Process transfer function
G−, G+  Invertible, non-invertible part of transfer function
Gc  Controller transfer function
Gcl  Closed-loop transfer function
Gf  Low-pass filter
GIMC  Internal model control transfer function
Gm  Process transfer function model
Hc  Controller output matrix
Jδ,est  Delay jitter estimation cost function
Jtot  Total cost function of MIMO process
JIAE, JITAE, JISE, JITSE  Integral error cost functions
JNCC  Network cost for control measure
K, Km  Process gain, process model gain
Kp, Ki, Kd  PID controller proportional, integral, and derivative gain
KKF  Kalman gain
L  Process time-delay (including constant minimum network induced delay)
LN  Time-delay of network
N  Number of
N(z)  Numerator of discrete-time controller
Nd  Derivative filter constant of discrete-time PID controller
Nh  Sampling instants per rise-time
Nmax  Jitter margin in terms of sampling intervals
NM  Number of states in Markov-chain
P  Kalman filter state covariance matrix
P  Markov-chain state-transition matrix
Q  State covariance matrix
R  Measurement covariance matrix
T, Tm, Tf  Time-constant of process, process model, low-pass filter
Ti, Td  Integration, derivation time of PID controller
Tout  Length of network outage
TGE  State-residence time of Gilbert-Elliott model
Tr  Rise-time
TW  Time-window
ΔT  Difference in time
L  Laplace operator
ℕ  Natural number
Pr  Probability
U  Uniform random distribution
1. INTRODUCTION

The use of wireless networks in control applications, so-called "wireless automation", is an emerging application area [50], [110], with the potential to revolutionize the automation industry [16]. The primary benefit of wireless control technology is reduced installation cost, as a considerable investment is made in the wiring of factories, both financially and in labor. The use of wireless technology is not only a replacement for cables; the benefits go beyond that. With wireless devices, increased flexibility is gained, as sensors can be placed more freely, even on rotating machines. Robustness is increased, as the communication can be routed over several paths in a mesh network and failure of cables is eliminated [155]. Finally, there are the opportunities for new applications that are enabled by wireless control. Some existing or emerging applications are remote control of devices, for example cranes or dexterous and mobile robots, mobile applications, and wireless monitoring of large plants for fault detection, maintenance, production quality monitoring, and compliance with environmental regulations [59].

There is a strong drive [156] to develop and deploy wireless networked control systems (WNCS), where a control system communicates over a wireless network, in factory and home automation [9], [40], [50], [59], [82], [163]. In a related field, sensor network applications have received much attention as well [2], [11], [158], [176]. Today, wireless automation technology is mostly applied in monitoring applications, because in these applications the network requirements in terms of real-time performance are low. The industry is cautious about applying wireless to closed-loop control, due to the unreliability issues of wireless networks. Consequently, current research on this subject generally aims at deterministic wireless control.

In addition to the technological and research interests, the simulation of WNCSs is important and necessary for several reasons. Current networked control system (NCS) research needs to be complemented by simulation to assess the validity and practical benefits of the developed theory and algorithms. The applicability of the developed algorithms must be evaluated in practical case studies. Simulations are a feasible way to test and assess the network and control strategies and theories for WNCSs before deployment. With simulations, problems occurring in the network and the resulting performance of the control
algorithms under these conditions can be studied. The critical properties and behavior of the network, and their impact on the control system, can be analyzed. In particular, the interaction between the network and the control system must be further understood, and the practical impact must be studied by simulation. These issues, especially the protocol-specific ones, are hard to approach analytically. Simulation studies will, hopefully, unravel these matters and lead to a coherent theory, best-practices knowledge, and design expertise for WNCSs.

This thesis focuses on simulation of WNCSs and on controller adaptation based on the wireless network quality of service. The aim is closed-loop control over an unreliable network, where the control system adapts to the network uncertainties. The network uncertainties can be due to fading and interference in the wireless communication, the non-determinism of the network protocols, and the varying demands of the application. The unreliability thus refers to the non-determinism and non-real-time operation of the network. When starting to work on the thesis, the questions that immediately arose were: How does the quality of service (QoS) of the wireless network change? How does that affect the control system? What should the control system compensate for? How should it compensate for the changes in the network QoS? The investigation of these issues started with the development of the communication and control co-simulator PiccSIM.

The currently available simulation tools for WNCSs are few and limited in simulation capabilities. Most of the available simulators concentrate on either the network or the control part. At the moment there exist only a couple of co-simulators in which both the network and the control system are properly addressed. The PiccSIM simulator, presented in Chapter 4, is an attempt to remedy this situation, with a complete set of modeling, design, and simulation tools. The initial simulation case studies presented in Section 4.7 give some insight into how the communication and control layers interact.

With PiccSIM, the controller-adaptive part of this thesis can be addressed. The main impact of the wireless network on the control system is the limited bandwidth and non-determinism, causing communication delay jitter and packet losses. The adaptive control schemes developed in Chapter 5 deal with these issues. The adaptive control algorithms are not adaptive in the traditional sense of adapting to changes in the process [184], but rather adapt to the network conditions. The controllers are not necessarily updated continuously, as is traditional in adaptive control, but only when compensation for the network conditions requires it. Thus, the control system is flexible in compensating for problems in the network. The adaptive schemes are ultimately verified by simulation on PiccSIM.
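The bursty, time-varying packet loss referred to above is modeled in this thesis with the two-state Gilbert-Elliott model (Sections 2.4.2 and 3.2). As a minimal illustration of the network behavior the adaptive schemes must cope with, the following Python sketch simulates such a channel. The transition and drop probabilities are arbitrary values chosen for the illustration, not the measured ones estimated in Chapter 3.

```python
import random

def gilbert_elliott(n, p_gb=0.05, p_bg=0.30, d_good=0.01, d_bad=0.50, seed=1):
    """Simulate n packet transmissions over a two-state Gilbert-Elliott channel.

    p_gb, p_bg:     Good-to-Bad and Bad-to-Good transition probabilities
    d_good, d_bad:  per-packet drop probabilities in the Good and Bad states
    Returns a list of booleans, True meaning the packet was delivered.
    """
    rng = random.Random(seed)
    good = True
    delivered = []
    for _ in range(n):
        # One step of the underlying two-state Markov chain
        if good and rng.random() < p_gb:
            good = False
        elif not good and rng.random() < p_bg:
            good = True
        # The drop probability depends on the current channel state
        delivered.append(rng.random() >= (d_good if good else d_bad))
    return delivered

if __name__ == "__main__":
    outcome = gilbert_elliott(10000)
    print(f"average packet loss: {1 - sum(outcome) / len(outcome):.3f}")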
1.1. Objectives of the Thesis

When the subject of the thesis was first envisioned, the premise was that in a WNCS an unreliable network is used, whose QoS will change over time due to the inherent uncertainties of wireless communication, changes in the environment, and non-deterministic network protocols. The solution would be to develop agile control algorithms that are flexible, self-tuning, and adaptive, to compensate for the deficiencies of the wireless communication.

The field of WNCSs is cross-disciplinary: both the network and the control system need to be taken into account. Traditionally, either the network or the control system has been studied separately, and there has been little research focusing on both aspects at the same time. The stability of NCSs has received plenty of attention in the literature [23], [72], [178], [74], [103], [160]. Little is said about the practical implementation, behavior, and performance of the control systems. Many of the stability proofs or controller design methods are cumbersome, for instance [74], and if all the network-related problems are to be taken into account, the proofs become complicated [103].

This thesis aims at simplicity, giving a practical viewpoint on WNCS operation through the simulation cases and implementation. Practical controller design methods that are likely to be applied and implemented in real WNCS applications are employed. Easy implementation is facilitated by using proportional-integral-derivative (PID) controllers and internal model control (IMC) design. The PiccSIM simulator, described in Chapter 4, is merely a tool to test the developed adaptive networked control algorithms presented in Chapter 5. The scientific contribution of this thesis is the developed adaptive control algorithms for WNCSs. The aim of this thesis is not state-of-the-art WNCS control performance and stability proofs, but to give more insight into the general tendencies of WNCSs and their practical implementation.

Wireless networks are inherently non-deterministic, and no network design can make them fully dependable, because of interference in the open communication medium. If, for instance, an industrial-standard WirelessHART-type network is used, the network performance can largely be considered deterministic, and the research deals with communication and controller scheduling [137], [160]. Instead of trying to make the network completely deterministic, which ultimately will fail, an alternative is to accept the network-related problems and use a cheap, but unreliable, network based on ZigBee or similar commercial off-the-shelf (COTS) technology. In return, the robustness of the control system must be improved to cope with these deficiencies. With this approach, wireless control can be applied in the automation industry and other applications without using possibly expensive industrial-grade hardware.

Increasing the control robustness against network uncertainty can be done, for instance, by controller tuning [47].
The idea of changing the controller is taken further in this thesis. Several adaptive control schemes or heuristics that compensate for the unreliable and non-deterministic network in a WNCS are developed in Chapter 5. The objective of this thesis is thus to develop control systems that work even if problems arise in the network.

The developed adaptation schemes address several different situations that arise in a WNCS: the self-configuration or self-tuning of the controllers depending on the network characteristics; the adaptation of control aggressiveness and of the network traffic generated for control according to the network congestion; the change of tuning in multiple-input multiple-output distributed control systems; and a heuristic to overcome network outages. All the developed algorithms are tested with PiccSIM, with promising results.
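To make the idea of network-adaptive tuning concrete before the detailed algorithms of Chapter 5, the toy sketch below detunes a PI controller as the measured packet loss grows, so that the loop slows down when the network degrades. It illustrates the general principle only: the scaling law, the thresholds, and all numbers are invented for this example and do not correspond to any of the schemes developed in this thesis.

```python
def adapt_pi_gains(kp0, ti0, loss_rate, loss_nominal=0.02, max_detune=4.0):
    """Detune a PI controller according to the measured packet loss rate.

    kp0, ti0:   nominal proportional gain and integral time
    loss_rate:  packet loss measured over a recent time window
    Returns (kp, ti): the gain is reduced and the integral time stretched
    by a common factor, so the loop becomes slower but more tolerant of
    missing and stale measurements.
    """
    # The detuning factor grows with loss, clipped so control never stops
    factor = min(max_detune, max(1.0, loss_rate / loss_nominal))
    return kp0 / factor, ti0 * factor

if __name__ == "__main__":
    for loss in (0.01, 0.05, 0.20):
        kp, ti = adapt_pi_gains(kp0=2.0, ti0=5.0, loss_rate=loss)
        print(f"loss {loss:.2f}: Kp = {kp:.2f}, Ti = {ti:.2f}")
```

The schemes of Chapter 5 differ precisely in what is measured (delay jitter, congestion, step responses, outages) and in how the retuning decision is made; the common thread is that the controller, not the network, absorbs the variation.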
1.2. Contributions and Organization of the Thesis

There are many research topics in the field of WNCSs and sensor networks, such as hardware, sensor and energy technology, network protocols, software, middleware, and control algorithms. In this thesis little or nothing is said about the hardware, the lower-level layers, and protocols, such as radio, medium access control, bandwidth allocation, controller scheduling, and security. The focus of this thesis is on WNCS simulation and design, and on adaptive control algorithms for WNCSs.

The main contributions of this thesis are the development of the simulation platform PiccSIM for communication and control co-simulation, including the user interfaces, the case study simulations done with the simulator, and the adaptive control algorithms for WNCSs. PiccSIM is released as an open-source package and is free to use [127]. The contributions are summarized in the following list:

Development and implementation of a simulation platform for communication and control co-simulation and design.
- Development of the communication and control co-simulator PiccSIM for wireless control systems.
- Development of the PiccSIM Toolchain for integrated networked control system design with PiccSIM, including network design, a control tuning tool, and simulation graphical user interfaces (GUIs).
- Integration of additional propagation models into the network simulator ns-2 for more realistic simulation of wireless networks with data-based radio environment models.
- Implementations and case studies of several different scenarios simulated on PiccSIM. Simulations of all the adaptive controllers developed in this thesis. The results give new insights into the behavior of networked control systems.
- Development of remote access to PiccSIM for educational remote laboratory experiments and for researchers around the world.
- Automatic code generation from Simulink model block diagrams for implementation on Sensinode wireless nodes, with two demonstration cases.

New concepts and algorithms for networked control systems.
- Network cost for control, relating network quality of service to quality of control.
- IMC-PID design for networked control systems.
- The networked PID controller, a distributed version of the PID controller.
- A method for changing the controller sampling interval online without bumps.

Development and simulation of several adaptive controller algorithms for networked control.
- Adaptive control tuning based on network delay jitter.
- Adaptive control speed and sampling interval based on network congestion.
- Adaptive MIMO control based on step response and load disturbance rejection, including the selection of the cost function for controller parameter optimization in a decentralized MIMO control scenario.
- A control heuristic and compensation during network outages.

The contents of the thesis are based on the work presented in the papers [P1]-[P11], done in cooperation with the co-authors. The thesis can be divided into two parts. The first part deals with practical control system design for wireless control systems. Chapter 2 gives the preliminaries of the thesis. Chapter 3 introduces some results regarding WNCSs related to network performance measurements and evaluation, and control design. Chapter 5 treats different kinds of adaptive control algorithms [P8], [P9] and heuristics [P10], [P11] for wireless control systems. Minor contributions related to this area can also be found among the control theory preliminaries in Chapter 2.

The other half of the thesis deals with the development of the PiccSIM simulator and the PiccSIM Toolchain in Chapter 4 [P1], [P2], [P3], [P6]. A survey of related simulators is given in Section 4.2. Because the PiccSIM platform has evolved over the years and a considerable number of simulations have been done, Chapter 4 concentrates on giving a whole, up-to-date view of the platform and a coherent presentation of the simulations and their results. Some illustrative simulations are additionally carried out with PiccSIM in Section 4.7, where different
simulation scenarios are considered, ranging from building automation and mobile robot control to wireless process control [P4], [P5], [P6].

The main work of the author is the development of PiccSIM and the implementation of the simulation cases in Chapter 4, the practical control results in Chapter 3, and the network adaptive control algorithms in Chapter 5. The co-authors of the related papers have mainly been involved in planning the simulation cases and writing the publications. In addition, Shekar Nethi has in particular developed the ns-2 part of PiccSIM, made the wireless measurements in Section 3.1, and assisted in the simulations. Jenna Brand has developed the wall-fading model in Section 4.3.4. Huang Chen from Vaasa University of Applied Sciences has implemented the ns-2 configuration tool presented in Section 4.4.2, with further development by Tuomo Kohtamäki, who has also implemented the PiccSIM user interfaces and the simulator time-synchronization and data-exchange mechanisms. Sofia Piltz has executed the simulations in Section 5.3. Kohtamäki and Piltz have done this work under the supervision and co-development of the author. The author has made the field overview and literature survey in Chapters 1 and 2, and developed the theory in Chapter 3.

The organization of the thesis is the following. In Chapter 2 the preliminaries used in the later chapters are established; most notable are the jitter margin tuning and PID controllers in Sections 2.5 and 2.6, and the IMC design framework in Section 2.7, which are used in several of the adaptive control schemes. In Chapter 3, new results regarding WNCSs are presented: measurements of packet drop and estimated network models are shown, and the application of IMC design to NCSs is analyzed. A novel network QoS measure for NCSs, based on packet drops, and its effect on the control system are presented in Section 3.5. The proposed network cost for control measure correlates with the obtainable control performance, and hence gives a good network design objective for WNCSs. The network and control co-simulator PiccSIM is introduced in Chapter 4, including the technical details and the PiccSIM Toolchain (Sections 4.3-4.6), and some simulation results which point out special characteristics of WNCSs in Section 4.7. In Chapter 5, the adaptive control algorithms and heuristics are developed. The adaptive schemes are presented in separate sections, with the simulations, results, and conclusions obtained with PiccSIM. The thesis ends with conclusions in Chapter 6.
1.3. Background of Wireless Control

One of the first real wireless control systems can be traced to US patent no. 613809 by Nikola Tesla, filed on the 1st of July 1898. The patent, named "Method of and Apparatus for Controlling Mechanism of Moving Vessels or Vehicles", described how to remotely control a boat, without mechanical devices or wires, by switching electrical motors on or off, or holding their state. In one demonstration Tesla remotely controlled a boat from 18 miles away on
the Isle of Wight [57]. The design was improved by Leonardo Torres Quevedo in 1903 (patented in Spain) with his Telekino, which introduced multiple states and codewords to control multiple devices (up to 19) of different types [125]. Later, Torres Quevedo envisioned implementing the same technology on torpedoes. He had additional plans to apply the Telekino to remotely control dirigible balloons and planes (because test flying was dangerous), but lack of funding made him abandon the development of his inventions.

The few early remote control applications used analog commands and the mechanism of radio-controlled electromechanical escapement, similar to the "Tesla boat" or the Telekino of Torres Quevedo. In the 1960s remote control developed drastically with transistor-based radios and multi-channel communication, which allowed simultaneous control in several control dimensions. An example is the control of the pitch, yaw, and motor speed of a remote-controlled model plane. The space age drove the technology forward, dictated by the need to get data from the spacecraft (telemetry) or send commands to it (remote control). The first packet-based radio network, ALOHANET, was deployed in 1971 for the University of Hawaii [57]. Industrial applications also started to emerge, as more information could be communicated to separate devices. In the beginning the wireless communication used proprietary protocols.

The first widespread industrial applications emerged in the 1980s, when remote-controlled switchyard locomotives and cranes appeared. At that time, proprietary devices working on standardized radio communication protocols were developed [140], [150].

The wireless local area network (WLAN), operating on the Industrial, Scientific, and Medical (ISM) radio band, started to be developed in 1985 and later became generally accepted through the IEEE 802.11 standard, which solved the limitations of the previous implementations [57]. Wireless digital communication developed in the early 1990s for cellular phones. Nowadays, coded pulse-width modulation or pulse-code modulation is used for planes and similar remote-controlled toys. Some more advanced model plane remote controls use the license-free ISM radio band at 2.4 GHz.

At the moment, the standardization of digital wireless communication and of protocols suitable for industrial control systems, such as IEEE 802.15.4 "ZigBee" [180], has sparked the field, and new interoperable devices from different vendors are emerging [16]. These advances have enabled the cheap and ubiquitous devices for the wireless automation of today, and wireless devices are currently starting to be applied in wireless automation applications. The development from fieldbus-based automation systems to networked systems, such as real-time Ethernet (RTE), and in the near future to wireless networks, is described in [50].
1.4. Wireless Control Systems and Simulation

In a networked control system, sensors, controllers, and actuators are connected through a computer network [9]. The standard approach in automation is to use a fieldbus, which connects all the devices through a shared network. One of the benefits of NCSs is reduced cabling cost [115], and cabling is removed completely by the introduction of wireless devices. Other advantages include the ease of adding field devices; two-way communication with field devices for remote configuration, device status, diagnostics, and health monitoring; and the possibility of more advanced control strategies because of improved field data [59], [115].

Cheap and proven technology from the office environment is being applied to automation. Ethernet networks are becoming regularly used and have to some extent replaced fieldbus technology in control applications [110]. The "industrial Ethernets", or RTE [36], [115], which allow for real-time operation, where an operation is guaranteed to be executed in a given time, are gradually being applied. The same benefits are also available by means of wireless technology, with the addition of accessing the data wirelessly using a handheld device, enabling in-situ inspection of the process [19].

The terms wireless networked control system and wireless sensor and actuator network (WSAN) refer to a control system which communicates over a wireless network. These systems deliver more benefits in terms of flexibility and cost compared to NCSs, as there are no wires, but also more problems, mainly because of the open-air, shared communication medium. The general convention for distinguishing between these two terms is related to the background of the researchers working in this field. WSAN refers to a wireless sensor network (WSN) [11] with the addition of actuators, whereas a WNCS is more aimed at wireless industrial automation. The former is rooted in the networking area and is more ad hoc, redundant, and tolerant of failures in the system, whereas the latter comes from the control area and is designed for high reliability and dependability.

An overview of NCSs can be found in [9] and [65]. The benefits of NCSs are that cabling is reduced, similarly to using an automation fieldbus, and that cheaper office-grade hardware is utilized [21]. The general development and philosophy of networked control systems are presented in [21] and [50].

There are many technological and social obstacles to using wireless networks in control. The main concerns against deploying wireless networks for control are the uncertainty of communication, co-existence with other wireless networks [50], and security. The inability to guarantee a sufficient quality of service for the control system is a real concern. Control engineers are hesitant to apply technology that cannot be trusted, since failure in control can cause physical damage. The network must therefore provide real-time and constant operation [110]. This required
real-time operation may not always be guaranteed, which causes problems for the control system design [93]. This thesis tries to show through simulations that hard real-time operation is not necessarily needed in practical applications. Soft real-time operation is enough, if it is taken into account in the control design, for instance through adaptation. Another concern hindering the adoption of wireless technologies is security, since the wireless medium is open to eavesdropping and interference [112].

WNCSs are in essence non-deterministic, stochastic, and asynchronous systems, which are difficult for traditional control theory, where a constant sampling interval is assumed, cf. the Z-transform. Therefore simulators for NCSs are needed, where the asynchronism and the issues related to the network and control interaction can be studied. Uniform packet loss or analytical delay distributions are usually assumed in networked control design; these assumptions do not necessarily hold in practice. Simulation of WNCSs with specific network protocols is thus needed, and therefore the network and control co-simulator PiccSIM is developed in this thesis. A toy numerical illustration of these network effects on a simple control loop is sketched at the end of this section. The strength of PiccSIM is that it enables one to quickly test several control algorithms in realistic WNCS scenarios [P2]. With the automatic code generation capabilities, the algorithms can easily be tested further in real applications [P3].

There are already some suitable simulators for WNCSs, such as TrueTime [22] and Modelica – ns-2 [17], reviewed in Section 4.2. PiccSIM integrates two simulators to achieve an accurate and versatile simulation system at both the communication and the control level for WNCSs. It has the unique feature of delivering a whole chain of tools for network and control modeling and design, integrated into one package with communication and control co-simulation capabilities. By combining the design and simulation of WNCSs into one tool, a flexible, integrated, and powerful co-simulation platform for research is obtained [P3]. With PiccSIM, the specific characteristics of WNCSs can be studied by simulations, as is done in some example simulations presented in Section 4.7.

The algorithms developed in this thesis are aimed at future agile wireless control systems, either in industry or in consumer applications. The adaptive control algorithms are designed to work when a non-deterministic network is used for control system communication. The network used would be classified either as an office network or as a WSN/WSAN. The target applications are in process control, as opposed to discrete factory automation. Typical usages are stable processes in industry, toys and home applications, or ubiquitous applications in society. Examples of home applications are building automation and remote-controlled radio cars and robots. In a ubiquitous computing future, the applications would be diverse. The initial industrial applications would be such that, by adding a cheap wireless control system, additional value would be obtained from the assistance of this secondary control. Nothing prevents the use of cheap wireless control in the future for a whole plant, provided
that it is stable and non-critical. For critical and unstable industrial processes, special industrial networks and protocols, which can deliver deterministic real-time performance, are recommended.
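The following minimal sketch, far simpler than the packet-level co-simulation PiccSIM performs, illustrates the network effects discussed above: a first-order plant under PI control whose measurements cross a network that drops and delays packets, with the controller holding the last received value (zero-order hold). The plant, the tuning, and the loss model are arbitrary choices made for this illustration.

```python
import random

def simulate(loss=0.0, max_extra_delay=0, steps=300, h=0.1, seed=2):
    """First-order plant y' = (-y + u)/T under PI control, unit step reference.

    Each sensor sample is dropped with probability `loss`, otherwise delayed
    by 0..max_extra_delay extra sampling intervals. The controller acts on
    the last measurement that actually arrived. Returns the integral of
    absolute error (IAE) over the run.
    """
    rng = random.Random(seed)
    T, kp, ti = 1.0, 2.0, 1.0
    y, integ, y_held = 0.0, 0.0, 0.0
    in_flight = []              # (arrival step, value) of samples in the network
    iae = 0.0
    for k in range(steps):
        # Deliver the samples whose network delay has elapsed
        arrived = [v for (t, v) in in_flight if t <= k]
        in_flight = [(t, v) for (t, v) in in_flight if t > k]
        if arrived:
            y_held = arrived[-1]        # hold the freshest arrival
        # PI control acting on the held, possibly stale, measurement
        e = 1.0 - y_held
        integ += e * h
        u = kp * (e + integ / ti)
        # Euler step of the plant and the control cost
        y += h * (-y + u) / T
        iae += abs(1.0 - y) * h
        # The sensor transmits the new measurement over the network
        if rng.random() >= loss:
            in_flight.append((k + 1 + rng.randint(0, max_extra_delay), y))
    return iae

if __name__ == "__main__":
    print("ideal network     IAE:", round(simulate(), 2))
    print("lossy and jittery IAE:", round(simulate(loss=0.2, max_extra_delay=5), 2))
```

Already in this crude setting the error integral grows markedly under loss and jitter; where exactly the loop becomes unacceptable depends on protocol-level details that only a packet-level co-simulator can capture, which is the motivation for PiccSIM.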
1.5. Research on Wireless Control Networks and Applications

The wireless roadmap developed by the RUNES project, with the technological and social developments needed for the adoption of wireless technology in automation, is summarized in [80]. A comprehensive overview of current technologies, future issues, and research topics of wireless industrial networking is given in [59] and [165]. Several wireless standards are presented and the anticipated promising research topics are introduced, some of which are: network architecture and scalability, network standards, quality of service measures, provisioning and analysis of wireless industrial networks, real-time operation and reliability, security, and energy efficiency. Another source of information on industrial wireless control is the report [46], where the whole field is reviewed, starting from wireless communication, moving to control issues and theories, and finally to simulation tools. The wired NCS case, with MAC, QoS, and other issues similar to the wireless case, is discussed in [110].

There are many other papers giving an overview of the current wireless technologies and networks for control, e.g. [59], [69], [124], and [163]. Gungor reviews the challenges, design goals, and technical solutions for industrial wireless sensor networks [59]. Willig [163] discusses several properties and challenges of using wireless in real-time control applications. Some of the network-related issues are: interference, path loss, timing and timeliness, co-existence with other wireless networks, and connection to an existing wired automation system. Pellegrini [124] discusses the requirements and features for using wireless at the device level in an automation system, including power consumption, security, and connection to the wired control system. The necessity of wireless protocols aimed specifically at control applications is also pointed out.

Wireless communication can be applied in many control applications in process control and factory automation. The first benefit is the reduced wiring and installation cost [19]. The savings naturally increase with increasing plant size, such as in oil refineries, and with an increasing number of sensors. The use of wireless technologies in automation enables one to place sensors more freely in a factory, even in places where it previously was expensive or impossible, such as explosive environments and rotating devices. Industrial robots will also become more agile as the wires are removed [150]. New applications using wireless communication will emerge, such as mobile applications.
1.5.1. Wireless Networks for Control

Wireless networks for control applications are currently envisioned to use standard existing wireless devices such as Bluetooth, ZigBee (based on the IEEE 802.15.4 radio) [11], and WLAN (IEEE 802.11). The wireless network design problems are presented, for instance, in [82]. Traditional computer networks, such as Ethernet and WLAN, use carrier sense multiple access (CSMA) type medium access control (MAC) with exponential back-off in case of collisions. Several MAC types are compared, and their suitability for control purposes evaluated, in [25], where, among the compared protocols, the CSMA type was found to be the best because of the immediate transmission opportunity. This result does not hold in high-traffic conditions, where collisions trigger back-offs, which were not taken into account in [25]. The non-deterministic exponential back-off of the default CSMA protocol is not suitable for wireless control applications, since the communication delay, which is important for control stability [23], cannot be bounded, and packet drop due to congestion decreases the performance [96]; a minimal sketch of this back-off mechanism is given at the end of this subsection. The currently preferred solution is to use deterministic networks, based on polling (e.g. Bluetooth) or scheduling (WirelessHART and ISA100.11a).

Wireless networks are already used for control. Some early adoptions of wireless devices as cable replacements are listed in [80]. The first wireless deployments have mostly been cable replacements using Bluetooth. Bluetooth has, however, given way to ZigBee, as ZigBee has lower power consumption and more flexible networking. An overview of ZigBee/IEEE 802.15.4 can be found in [11]. ZigBee has rightfully been criticized for being unreliable, for lacking techniques to mitigate the communication problems, and for being unsuitable for industrial control [88]. ZigBee is more suitable for small applications, and there are separate industrial standards for wireless automation. Using standard wireless hardware for automation is considered in [124], where two application-layer protocols suitable for real-time control are designed and evaluated.

In the current wireless automation applications, the radios typically operate in the open ISM frequency band. The ISM band is quite crowded, as the office networks (WLAN, Bluetooth) also operate at the same frequencies. In the future, a separate frequency band could be reserved world-wide exclusively for industrial automation applications, to enable proper, interference-free wireless control operation.

The use of heterogeneous networks spanning the whole automation system, from low-level devices to high-level functions such as production monitoring, is considered in [115] and [110], where the applicability of different networks to the different levels and tasks is evaluated. For the higher-level functions, such as plant monitoring and production planning, trend analysis, or the gathering of batch information, real-time operation is not necessary, and office-grade wireless networks are suitable for these tasks. In the current wireless automation
standards, only device level wireless networks are considered, where sensor devices report their measured values and possible health data to a gateway and the rest of the automation system. The network is thus used only at the lowest device level of the whole automation system [150]. In practice, plant wide wireless networks with proprietary protocols based on the office grade IEEE 802.11 standard are also used.

Despite the wireless communication, the devices may still have wired power, because of the large power requirements of the sensor or, more often, the actuator. For truly wireless devices, the power source must be local. A battery contains a finite amount of energy, and thus either the device lifetime is limited, or energy must be gathered during operation from the environment with energy harvesting techniques. Sources of auxiliary energy are, for example, electromagnetic waves, light, vibration, or temperature differences [123]. Another solution to completely get rid of cables is wireless power transfer. An existing solution is inductive power transfer to devices located inside a cage [140]. The cage walls induce a rotating magnetic field that solenoids in the devices convert to current. Typical transferred power ranges from 10 to 100 mW [150].
1.5.2. Current Standards for Wireless Automation

Currently, there are two standards for industrial wireless automation applications: WirelessHART and ISA100.11a. Both industrial standards are based on the IEEE 802.15.4 radio [180]. The IEEE 802.15.4 standard is suitable for building automation [76], industrial monitoring, and control applications [40], [161]. Its main characteristics are low bit rate and low power consumption. The WirelessHART standard and some implementation details are discussed in [148]. ISA100.11a is in practice very similar to WirelessHART, as both have similar design goals and use the same radio, but the two standards are not compatible. The WISA system is a complete solution for a reliable wireless cell in industrial manufacturing [140].

The architecture of both industrial wireless network standards includes sensor nodes, wireless routers communicating with each other, and a gateway, which is connected to the automation fieldbus and the rest of the automation system. Mesh networking is possible for reliability, but all communication between devices in the wireless network is routed via the gateway. This routing constraint makes the network scheduling and routing design easier.

WirelessHART was approved by the International Electrotechnical Commission (IEC) as a full international standard (IEC 62591 Ed. 1.0) in March 2010. Several manufacturers have released devices for WirelessHART and it is by now in use in control applications [166]. The ISA100.11a standard [70] was published in September 2009 and gained IEC approval in 2010. Hence, the field of industrial wireless control has taken its first steps. The standards are designed for determinism, such that traditional control can readily be applied.
Although determinism is the main design goal, it can never be fully assured, and it comes at the expense of performance and flexibility.

WirelessHART uses a MAC protocol combining time division multiple access (TDMA) and frequency division multiple access (FDMA). The TDMA slot is 10 ms, in which a data packet with sensor or control information and an acknowledgement are exchanged between two nodes. The network and transport layers are based on the Time Synchronized Mesh Protocol (TSMP), originally developed by Dust Networks [155]. Each node pair is assigned a unique time/frequency slot for contention free communication by a centralized network manager [155]. Some slots can be reserved for contention based access using CSMA, for communicating rare event messages or retransmissions in case of dropped packets. Additionally, frequency hopping is used to mitigate interference on some channels. A more detailed presentation of WirelessHART can be found in [148]. The benefits of WirelessHART, and how to accommodate the control system to the wireless network and meet the required control performance, are discussed in [117]. ISA100.11a uses similar techniques, and both network standards can be applied where the application can tolerate a delay jitter on the order of 100 ms. The delay jitter stems from packet drop due to interference.

The scheduling and routing of the WirelessHART and ISA100.11a networks are left open in their standards. Due to the determinism of the TDMA approach with a pre-determined schedule, fixed bounds on the communication can be advertised, although not guaranteed. In the case of packet drops, retransmission is needed, which may cause the information to exceed the delay bound. Retransmission slots must thus be incorporated into the schedule, which reduces the usable bandwidth and unavoidably introduces delay jitter. Retransmission can take place in the slots allocated for random access, or in extra slots allocated in the schedule. The schedule and retransmissions determine when information is available to the control system, and hence affect the control operation. There exists work where the actual network MAC protocol and related functions, such as duty-cycling [102] or routing and scheduling [137], [160], are taken into account in the control stability proof.

The current standards are designed for reliability and are thus conservative, which implies that closed-loop control of fast processes is not possible. The design decisions of both standards ensure a relatively simple network design. The use of TDMA ensures determinism (disregarding packet drop due to interference) and the routing-via-gateway constraint results in a simpler routing design. Current research related to the standards concerns, for instance, the optimality of the time/frequency slot scheduling and routing [160]. The room for improvement is thus limited.

The future research issues therefore include new technologies and algorithms to advance the capabilities of wireless control. The introduction of new agile and
intelligent communication methods will improve the field. These new networks will probably not guarantee a certain QoS or be deterministic, as is the case when using TDMA. One research direction is then the introduction of adaptive control methods to compensate for the deficiencies of the wireless communication, which is the focus of this thesis.

In the future, wireless control systems with low performance requirements are likely to emerge. These can be based on commercial off-the-shelf hardware, by adopting robust control algorithms. Today's COTS hardware, such as WLAN, Bluetooth, and IEEE 802.15.4, mostly utilizes CSMA type communication [69]. This implies that the network is inherently non-deterministic and unreliable. There are no quality of service guarantees, such as designated transmission slots. This does not mean that wireless applications on this hardware are impossible; it is rather a research opportunity. Several practical applications can be shown to work satisfactorily, using simulations and pilot implementations.
1.5.3. Wireless Sensor Networks

Wireless sensor networks are a field closely related to WNCSs, with a lot of ongoing research. In a WSN, a low powered wireless network with hundreds or thousands of nodes senses or observes some phenomenon, collaborating on environment monitoring to deliver situation awareness to the user [73]. The nodes are small and low cost, with a limited operational time [158]. The limited power source of WSN nodes demands algorithms with low computational and communication requirements to enable a long application lifetime [59]. The applications range from environmental, agricultural, or structural health monitoring (forests, crops, earthquakes, bridges, and buildings, among others), and asset management (inventory surveillance, plant monitoring, and maintenance), to military and battlefield applications (detection of events such as enemy activity, poisonous gases, or radioactivity) [11]. The key properties, applications, and open research problems of wireless sensor networks are summarized in [176], [2] and [59]. The leading research is summarized in [11].

The network related research topics in WSNs are mostly medium access control and routing [73]. The networking issues are similar to those of WNCSs, but there is usually no closed-loop control, and thus the real-time operation requirement is not as strict as in wireless automation. Reliability is obtained with redundancy and distributed computation. Low power consumption of the tiny network nodes is necessary to save the battery. This boils down to hardware and MAC protocol design, for example in the WiseNET sensor network [43], or in TUTWSN, developed at the Tampere University of Technology, Finland [78]. Other topics in sensor networks are data compression, storage, transport, processing, and enhancement [54].

Sensor networks can be used as a monitoring system for plants, where the sensors deliver additional measurements of a plant, independent of the automation
system. The increasing demands for efficient and ecological production require new, cheap, and flexible production monitoring technologies. Industrial wireless sensor networks can be used for production monitoring of energy efficiency and compliance with environmental regulations [59]. Another similar application is the “mobile wireless industrial worker”, where a serviceman can walk in a factory and monitor the nearby sensors and actuators with a wireless handheld device [19].

The issues and challenges of applying a sensor network to factory automation are summarized in [179]. Such lessons are valuable for the deployment of wireless control in industrial environments, as the conditions may be quite harsh, including shadowing and interference from motors and devices [164]. There are some reports on the experiences of sensor network deployments in industrial environments. A four month continuous monitoring campaign of a plant has been reported, where power management protocols and periodic system resets were used [81]. Another example is a sewage overflow control system called CSOnet. This is a metropolitan-wide sensor and actuator network, consisting of about 150 wireless sensor nodes, used to prevent sewage overflow by measuring the water levels in the sewers and controlling storm water flow in case of heavy rain [109]. The experiences of a WSN deployment in a mine are presented in [1].
2. PRELIMINARIES – NETWORKS AND CONTROLLERS

In this chapter, preliminary information and relevant theory needed later are summarized. First, the general assumptions on the WNCSs used in this thesis are listed and networked control structures are discussed. A defining feature of WNCSs is packet drop; therefore, several packet drop models are presented in Section 2.4. Measurements and estimation of the corresponding packet drop models are carried out in Sections 3.1 and 3.2.

In the following sections, some controller design and tuning methods for WNCSs are presented. First, a stability criterion for NCSs, used in many of the controller tuning algorithms, is given in Section 2.5. Several PID controller structures suitable for NCSs are then presented in Section 2.6 and later, in Section 3.3, a new control structure is proposed. The internal model control framework is treated in Section 2.7, including the IMC-PID controller design.

In Section 2.8, some initial approaches in the literature on network adaptive control and control traffic adjustment are reviewed. Network congestion and adaptation methods for control traffic are also discussed. These issues are developed further in Section 3.5. Finally, Kalman filtering in NCSs with packet dropout is presented in Section 2.9.
2.1. The Networked Control Problem

The general problem in the NCS field is the stability of the control system in the case of information loss. In a traditional wired control system, the operation is deterministic and the sampling instants are equally spaced. Such dynamic systems can be analyzed effectively using the Z-transform, for which several stability proofs exist, based for instance on the poles of the closed-loop transfer function.

In the wireless control case, the information flow between some of the components is stochastic and the situation becomes problematic. The stability then depends on the varying delay and packet drop of the network. Often the system is also not synchronized, or periodic sampling is not possible because the sensors and controllers are distributed, which means that the Z-transform cannot
be readily applied. This results in stochastic stability proofs, or in cases where, for example, all possible packet drop realizations have to be enumerated to prove stability.

Current wireless control system research has its roots in networked control system theory, as the issues of a shared communication medium are the same. The research problems are mainly related to variable communication time-delays and packet losses, and to system architecture design, see [179] and [165]. Both fields deal with network protocols [6], [102], transmission scheduling [160], [159], communication and control co-scheduling [137], [153], traffic reduction [27], [92], congestion control [134], [157], and estimation [113], [173], [177]. The main difference between NCSs and WNCSs is that wireless communication is less deterministic because of external interference and finite communication range, but problems with wiring and failing connectors are eliminated.

Some of the approaches for proving controller stability include: LQG control [60], Linear Matrix Inequalities (LMIs) [178], Markov Jump Linear Systems (MJLS) [74], the jitter margin [23], [72], Lyapunov functions [103], the power spectrum [94], and optimal communication scheduling for stability [160]. Other related control theory concerns Kalman filtering [144], [171], controller tuning [47], [67], and control performance [91], [94].
2.2. General Assumptions

Throughout this thesis, certain assumptions on the studied WNCS are made. The assumptions are declared and motivated here. Previously, the majority of the literature focused on wired networked control systems. Nowadays, wireless NCSs are also considered. This thesis focuses solely on WNCSs, and the simulated cases all use a wireless network. Some of the developed theory can be applied to wired NCSs, although the problems of NCSs are exaggerated in the wireless case, as wireless networks are in general less reliable than wired ones, because the shared and open transmission medium is susceptible to interference.

The wireless network is thus assumed to be unreliable, with time-varying delivered quality of service and the possibility of longer outages. The unreliability is due either to the properties of the wireless communication, or to the non-deterministic network protocols used, such as a CSMA-type MAC. The adaptive algorithms adapt to the general, slowly changing performance of the network. Instantaneous accommodation to sudden bursts of packet drops is in practice impossible, and can cause instability due to switching of controller parameters. The adaptation is done slowly, such that problems of instability due to switching of the tuning are not an issue, as is customary in adaptive control approaches [184].
Time-driven sensors, controllers, and actuators are assumed throughout. This implies that the observed delays and delay jitters are effectively quantized to multiples of the sampling interval. This simplifies the analysis, since actions between sampling instants need not be taken into account. If event-driven controllers and actuators were assumed, as is sometimes the case in the literature, the theory and implementation would be more complicated, as the algorithms would become truly time-variant. In practice, systems are still asynchronous, as there might be a time-offset between the sampling instants of the clocks of the nodes in the WNCS, if they are not synchronized. Random time offsets are automatically used in the PiccSIM simulator.

Due to the choice of time-driven operation, a zero order hold (ZOH) is assumed at the receiver until the next sampling instant. In the case of a dropped packet, ZOH is also used, such that the previously received information is held until a new value is received.

The wireless nodes are assumed to be ideal, in the sense that the input/output and computational tasks are always performed on time. The hardware, including the microcontroller and radio, is not modeled. The scheduling of the tasks in the microcontroller and the resulting computational delays are not taken into account in the PiccSIM simulator. It is assumed that the operations are bounded by the sampling interval, such that sampling, transmission, and reception are executed before the next sampling instant. This is motivated by the short communication delay compared to the sampling interval, typically observed in the simulations of this thesis.

The wireless network is often assumed to reside between the sensor and the controller. The controller is co-located with the actuator, which naturally eliminates one (unnecessary) wireless communication link between the controller and actuator, and the controller can take advantage of the wired power often required by the actuator. Only wireless measurements are assumed in the theory, since the stability proofs are formulated for this case only. In practice, depending on the application, some simulation cases also have wireless communication between the controller and actuator.

Stable processes are assumed, as an outage in the network makes the control system operate in an open-loop configuration, which would be detrimental when controlling an unstable process. Furthermore, a simple process model is preferred. In control design, a first-order process with time-delay (FOTD) [185] of the form

G(s) = \frac{K}{Ts + 1} e^{-\tau s},   (1)

is generally assumed, where K is the process gain, T is the time-constant, and τ is the time-delay. For higher-order processes, a first-order approximation can in some cases be used. The total time-delay L in a control loop is defined as

L = \tau + L_N,   (2)
where L_N is the constant minimum communication delay of the network [23]. The control design is always done for the total delay L. On top of the constant time-delay, an additional varying delay δ(t), caused by the network, is often present.

For communication, today's commercial off-the-shelf radios, or similar, are assumed to be used. In the PiccSIM simulations, an IEEE 802.15.4 network [11] is always used. This network type is selected because it is well suited for low power, low bandwidth communication, and the current wireless automation standards use it. Non-deterministic operation of the network is assumed, mainly due to the CSMA-type MAC protocol. Deterministic approaches such as WirelessHART are not considered, because they do not pose the same problems of varying delivered QoS. UDP-like (User Datagram Protocol) communication is used, since the sensors and controllers are time-driven and send packets at a fixed rate. UDP does not retransmit dropped packets, but this is not required in control applications: due to the real-time operation, sending new information is preferable to retransmitting old information, which may be outdated by the time it is retransmitted. Traffic rate adapting protocols, such as the Transmission Control Protocol (TCP), cannot be used in control applications, because of the constant packet rate produced by the sensors. Thus, before deployment of a wireless automation system, the designer has to verify, for instance by simulation, that the bandwidth of the network is adequate for the application. In Section 5.2, a controller with an adaptive communication rate is developed to alleviate this situation.

In this thesis, only problems of packet drops in the network are considered. Due to the time-driven assumption, packet drop can be thought of as a kind of varying delay, as shown in Section 2.4.1, since the controller has to wait for the next measurement packet if the current one is dropped by the network. In the simulation cases of this thesis, the varying delay induced by the network is negligible compared to the sampling interval, so only packet drop needs to be considered in the control design.

Only lightweight control algorithms, such as variations of the PID controller, are considered. The low computational capability and the power saving requirements of wireless nodes necessitate simple algorithms. The PID controller is also favored because of its widespread use in industry.
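To illustrate these modeling assumptions, the following sketch discretizes the FOTD model (1) with a zero order hold at sampling interval h, and represents the total delay (2) as a whole number of sampling intervals; all parameter values and variable names are illustrative, not taken from the thesis.

```python
from scipy.signal import cont2discrete

# Illustrative FOTD parameters for G(s) = K/(T s + 1) e^{-tau s}, eq. (1)
K, T, tau = 1.0, 2.0, 0.3      # gain, time-constant, process delay [s]
L_N = 0.05                     # constant minimum network delay [s]
L = tau + L_N                  # total loop delay, eq. (2)
h = 0.1                        # sampling interval [s]

# ZOH discretization of the delay-free first-order part; under the
# time-driven assumption the total delay L becomes an input shift
# of n whole sampling intervals.
num_d, den_d, _ = cont2discrete(([K], [T, 1.0]), h, method='zoh')
n = int(round(L / h))
print(num_d, den_d, n)
```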
2.3. Networked Control Structures

When designing a control system for a WNCS, the selection of the control structure is important, as it determines what information is processed in which part of the network, and what information needs to be communicated to the other nodes in the control system. The controller algorithm can then be constructed with special logic to handle separate cases, depending on what information has been received or lost. There are many possible control structures and design approaches for NCSs or WNCSs, of which only some are discussed here.

In this work, single-input single-output (SISO) control loops are mainly considered, which can be extended to the multiple-input multiple-output (MIMO) case by parallelizing several SISO loops. Other MIMO architectures, such as centralized or hierarchical ones, are naturally possible. Three main control design and tuning approaches for NCSs, with more or less traditional control structures, are considered next.

The first and most complicated approach is to design an optimal controller that can stabilize the process under given delay and loss specifications. In the literature, the controller is usually of state-feedback type, either time-varying or constant, as depicted in Figure 1a. The control system may need a state observer at the transmitter if the state is not directly observable. The optimal controller is usually designed by casting the problem as an optimization over linear matrix inequalities, see e.g. [65] and [67]. The mathematics is quite involved, and it is thus unlikely that this method will become a mainstream approach in practical applications, where the operator should be able to understand the control algorithm and be assured that it works properly.

The second approach is to use a model at the controller to predict the process output during outages. The objective is to estimate the current process state, as shown in Figure 1b, by using the received intermittent and delayed measurement packets [108]. The network delays are taken into account in the state-estimator, and the state can be predicted if a packet is dropped. In this way, a current process state estimate is always available to the controller, which can be any conventional (non-network-aware) controller [98]. A suitable estimator for NCSs is the Kalman filter (see Section 2.9), because of its convenient form with a prediction and an update phase. In [141] and [174] a "smart sensor" is used, capable of doing some processing on its own. The filtering is done at the sensor and the state estimate is sent over the network. This ensures that the estimate is optimal, since no measurements are lost, and the current state can be calculated by prediction if packets are dropped. Estimation at the sensor has the downside that the control input to the process has to be transmitted to the sensor without delay and loss, which is not practically achievable. Further, state-estimators at both the sensor and the controller can be used to reduce the traffic, by estimating the current process state without the need to transmit all the measurements [177]. In this case, the
estimates are updated by communication only if the estimation error grows too large.

The third alternative is to still use a conventional controller, such as the PID controller (Figure 1c), and tune it to be robust to the packet drops and delay jitter (Section 2.5) [47]. The advantage of this approach is that the PID controller is widely used in industry. When wireless communication is adopted for control applications, the PID controller is already available in the automation system, and implementing a new controller suitable for wireless automation is more laborious than retuning an existing PID controller. Thus, PID controllers will most probably be adopted for wireless control applications. Additionally, operators are familiar with them, understand how the control law works, and have confidence in it.
[Figure 1. Some control structures suitable for networked control systems: (a) optimal state feedback with a state estimator, (b) state estimator and a regular PID controller, (c) network-aware, jitter margin tuned PID controller.]
[Figure 2. Approaches to control with discrete-time feedback information in NCSs; discrete-time signals indicated with dashed lines. Top: only the communication is in discrete time (continuous-time state feedback controller). Bottom: discrete-time PID controller.]
The simulations in Section 4.7.2 compare these control structures. The rest of the simulations use, in general, the structure of case (c), whereas the proposed Networked PID in Section 3.3 is an attempt to obtain the advantages of case (a) in a lightweight manner. This is further combined with case (b), to achieve additional benefits, in the steady-state heuristic suggested in Section 5.4.

Besides the controller structure, the approach to control design with packet based communication is another fundamental issue. In the literature there are two approaches to deal with the case where the feedback information is received as discrete-time packets over the network, as depicted in Figure 2. One is to view the control system as a continuous-time system where, for implementation reasons due to the network, only the feedback information is in discrete time, as in [103]. In this case, the discrete-time communication approaches the continuous-time system asymptotically as the sampling interval decreases. Typical approaches are state-feedback controllers [128] or other continuous-time controllers with information updated at discrete time-instants [103].

With truly discrete-time controllers, the control algorithm is calculated whether a packet is received or not. This might cause some trouble for correct operation. On the other hand, if the control algorithm is only calculated at the
reception of a new packet, the control response changes depending on the timing of the execution events. In this case, the assumption of constant-interval operation is not valid. The controller must be changed as a function of the packet inter-arrival time or rate, similarly to the PID PLUS controller in Section 2.6.2 or [53], to reproduce the operation of the ideal continuous-time counterpart. This is typically not done in the literature, e.g. [4], [20], and [128], and as a consequence the control response degrades when the actual sampling interval deviates from the designed one. Proper adjustment of the controller sampling interval and tuning is shown with one of the developed adaptive control schemes in Section 5.2.2.

Both continuous- and discrete-time approaches have their advantages. In the former case, the control design is done in continuous time, where event-driven feedback is most naturally formulated [8], [153]. In the discrete-time controller case, packet drop is more natural to deal with, as the signal value is held until the next sampling instant. The resulting network traffic is predictable, as the sampling interval is constant, and the implementation is better suited for scheduled networks.
2.4. Network Models

In WNCSs, the essential challenges for the control system are packet drop and delay jitter caused by the network. Delay jitter is in general caused by packet drop, by random transmission opportunities in CSMA-type MAC protocols, or by different sequences of timeslots in TDMA MAC protocols. In all cases the delay jitter is aggravated in multihop communication, typical for WNCSs, as the delay accumulates at every hop. Packet drop occurs when there is packet collision, poor signal strength, or interference. For simulation of WNCSs and for analysis purposes, network models that imitate the packet drop and delay jitter of real wireless networks are needed.

In industrial or factory environments, the radio propagation deviates considerably from the ideal free space propagation models used in most network simulators. Besides the simple free space model, there exist many other fading models for wireless communication [57]. Metal and obstacles cause shadowing and multipath effects that amplify or attenuate the radio signal strength. The radio environment in a factory can be harsh, with interfering electromagnetic radiation from motors and moving machinery temporarily blocking links of the wireless network. Reflections of radio waves can in these environments be an advantage, because shadowed locations can obtain a strong signal through reflections.

There are several studies of the performance of IEEE 802.11 networks, e.g. [131], where the network design is also discussed. There are some reports on measurements done in industrial environments. The received signal strength in a chemical pulp factory, a cable factory, and a nuclear power plant was
measured with an IEEE 802.11 network in the 2.45 GHz ISM radio band [77]. The conclusion of the experiments was that the radio environment is not as harsh as initially thought; reflections and diffractions improve the signal strength in shadowed areas. The study in Section 3.1 reveals that, while many locations benefit from multipath fading, communication in some locations is impossible, due to no signal or destructive interference, even if the distance is short. Another study presents measurements of the bit-error-rate and, more importantly, the error pattern of an IEEE 802.11 network in an industrial environment [162]. Interesting findings were that the packet losses are correlated, and that error burst and packet loss burst lengths fluctuate over several orders of magnitude with time. This means that runs of consecutive packet drops may at times be long and hard to eliminate, for various physical reasons caused by the environment and the radio. On the other hand, error free periods also vary and can be long. Packet loss rates vary from over 80 % down to less than 10 % in favorable situations. In the Internet, packet drop is found to be mostly random [15].

In office environments, similar measurements can be made. An example is [169], where the propagation channel is measured. Among the tested models, the Ricean model fits the data best. Ricean models are estimated for different distances and configurations between the transmitter and receiver. Because of multipath propagation, the parameters of the model are not linearly dependent on the transmission distance, as is generally assumed. On large scales, the log-normal distribution fitted the data well [169]. Wired Ethernet traffic is studied in [87], where the self-similar property of the traffic is demonstrated. Similar behavior can be assumed for WLAN networks in office environments, as both use CSMA. Studies of the traffic properties of the Internet have also been made [111].

In this section, the focus is on models for packet drop in the network. This restriction is made because the main limiting factor in real-time control is the loss of feedback, for instance caused by packet drop. First, the relationship between packet drop and delay is established. Both simple and data-based packet drop models, which are adequate for basic simulations of unreliable networks, are developed in the following subsections. For more realistic packet drop behavior of the network, a network simulator, where also the network protocols and packet collisions are taken into account, can be used, as discussed in Section 4.3. Real environments have also been measured in this thesis, as reported in Section 3.1, to make the simulation results more realistic. Based on the radio environment measurements, packet drop models are estimated and the model fit is evaluated in Section 3.2. These network packet drop models are integrated into the network simulation model as described in Section 4.3.4.
2.4.1. Packet Drop – Delay Jitter

Although delay jitter and packet drop are two distinct phenomena with different causes, they are linked in a sense, as their effects on the control system are similar. Consider a controller with a zero-order hold. When a packet is dropped, the controller uses the most recently received data. The drop of a packet thus effectively causes an increase in the delay, seen as delay jitter. In this thesis, the term delay jitter is used even if the actual underlying event is packet drop. With pure delay jitter no information is lost, but in a real-time system it may become outdated and thus useless.

In wireless communication, packet drop due to interference or collisions can be approximated by uniform random packet drop with a certain probability [15]. Consider a network with a constant delay L_N = nh, where n \in \mathbb{N} indicates the delay in terms of sampling intervals h, and a random packet drop with probability p_{drop}. With time-driven algorithms and the ZOH assumption at the receiving side, it follows that in the network simulations the output of the network is described by

y_{out}(k) = \begin{cases} y_{in}(k - L_N/h), & r(k) > p_{drop} \\ y_{out}(k-1), & \text{otherwise} \end{cases}, \qquad r(k) \sim U(0,1),   (3)

where y_{in} and y_{out} are the input to and output from the network, respectively, and r is a uniformly distributed random number between zero and one. The previous output is thus held if a packet is dropped. The delay jitter caused by packet drop according to the above model is

\delta(t) = t - t_n \quad \forall t \in [t_n, t_{n+1}), \qquad t_n = t \text{ when } r(k) > p_{drop},   (4)
where t_n are the arrival times of the received packets. An example realization of the packet-drop-induced delay is plotted in Figure 3 for a uniform packet drop probability of p_{drop} = 0.2 and a sampling interval of h = 0.1 seconds. Notice the additional constant minimum delay L_N related to transmission.

At the receiving side, the communication delay is in certain cases needed by the control algorithm. Delay estimation with a linear estimator, assuming a slowly changing random delay, is presented in [139]. A simple delay jitter estimation algorithm for a quickly changing delay, where the delay can change at every time-step, is presented next. It relies on counting the timestamps, and the gaps due to packet drop, between the received packets.
[Figure 3. Delay with uniform packet drop probability of p_drop = 0.2 and sampling interval of h = 0.1 s. The plot shows the sawtooth-shaped delay (in seconds) over a 20 s window.]
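As a concrete illustration, the following sketch implements the network model of eq. (3): a constant delay of n_delay sampling intervals, uniform random packet drop, and a zero order hold at the receiver. Function and parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def network_output(y_in, p_drop, n_delay):
    """Packet-drop network with ZOH at the receiver, eq. (3)."""
    y_out = np.zeros_like(y_in)
    held = 0.0
    for k in range(len(y_in)):
        j = k - n_delay                      # sample delivered after delay L_N
        if j >= 0 and rng.random() > p_drop:
            held = y_in[j]                   # packet received: update held value
        y_out[k] = held                      # otherwise hold the previous output
    return y_out

# Same setting as Figure 3: p_drop = 0.2, one-interval network delay.
y = np.sin(0.1 * np.arange(200))
print(network_output(y, p_drop=0.2, n_delay=1)[:5])
```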
On the reception of a packet with timestamp t_{n-1}, the next packet is expected at time t_{n-1} + h, where h is the sampling interval of the sensor. If, however, one packet is dropped, the next received packet has timestamp t_n = t_{n-1} + h + d_n, where d_n > 0 is the additional delay. The delay difference d_n is the difference in timestamps between the two most recently received packets t_{n-1} and t_n according to

d_n = t_n - t_{n-1} - h.   (5)

If d_n = 0, there is no delay jitter. To record the delay jitter, the tuple of delay jitter d_n and timestamp t_n of the received packet is stored. In practice, the delay statistics over a given time-window [t - T_W, t] of length T_W are used. Thus, all the jitters from the current time-period T_W are collected in D(k):

D(k) = \{ d_n, t_n \mid \forall t_n \in [t(k) - T_W, t(k)] \}.   (6)

Here t(k) refers to the current time. The maximum delay jitter in the time-frame [t(k) - T_W, t(k)] is defined as

\delta_{max}(k) = \arg\max_{d} D(k).   (7)

This delay counting is used in the adaptive jitter margin controller of Section 5.1, and the notion of packet-drop-induced delay (4) is used in all the simulations.
The assumptions of this method are that every packet carries a timestamp and that the delay jitter seen by the controller is due only to dropped packets. This means that the delay variation of successfully transmitted packets must be considerably smaller than the sampling interval of the controller. In most applications this can be assumed if the network is small and the communication times are short compared to h. A more complex delay estimation algorithm, which avoids these assumptions, is the Kalman filter based maximum a posteriori method presented in [P7].
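A minimal sketch of the timestamp-based jitter bookkeeping of eqs. (5)–(7); the class layout, names, and window handling are illustrative.

```python
from collections import deque

class JitterEstimator:
    """Delay-jitter statistics from packet timestamps, eqs. (5)-(7)."""

    def __init__(self, h, T_W):
        self.h = h            # sensor sampling interval
        self.T_W = T_W        # statistics time-window length
        self.D = deque()      # stored (d_n, t_n) tuples, eq. (6)
        self.t_prev = None

    def packet_received(self, t_n):
        if self.t_prev is not None:
            d_n = t_n - self.t_prev - self.h     # delay difference, eq. (5)
            if d_n > 1e-9:                       # d_n = 0: no jitter to record
                self.D.append((d_n, t_n))
        self.t_prev = t_n

    def delta_max(self, t_now):
        # Keep only entries inside the window [t_now - T_W, t_now]
        while self.D and self.D[0][1] < t_now - self.T_W:
            self.D.popleft()
        return max((d for d, _ in self.D), default=0.0)  # eq. (7)
```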
2.4.2. Drop and Delay Models based on Markov-chains

Instead of using a static drop probability as a model for the network, a Markov-chain can be used to model correlated network delay or packet drop [172]. In this section, several Markov-chain packet drop models are described and the identification of the Gilbert-Elliott model is presented. These models are identified from data in Section 3.2 and used later in the thesis in the simulations.

A Markov-chain is a sequence of random variables X(k) defined by the probability of being in a state χ according to

P = \Pr\big( X(k+1) = \chi(k+1) \mid X(k) = \chi(k) \big),   (8)

where P = [p_{ij}] is the state-transition matrix, giving the probability of changing from state i to state j. The steady-state distribution of the Markov-chain is given by the left eigenvector of the equation \pi = \pi P corresponding to the eigenvalue 1 [34].

For modeling a network with a maximum delay jitter of δ_max, a Markov-chain with N_M = δ_max / h states, each corresponding to a delay value, can be used. The delayed output of the network is then dictated by the current state of the Markov-chain.

If a network with constant delay and only packet drops is considered, a Markov-chain can also be used. In this case, the delay increases by one sampling interval if a packet is dropped, or returns to the minimum delay if a packet is transmitted successfully. Thus, with uniform random packet drop and a maximum number of consecutive packet drops of N_M = δ_max / h, the Markov-chain state-transition matrix is of the form

P = \begin{bmatrix} 1 - p_{drop} & p_{drop} & 0 & \cdots & 0 \\ 1 - p_{drop} & 0 & p_{drop} & \cdots & 0 \\ \vdots & & & \ddots & p_{drop} \\ 1 & 0 & 0 & \cdots & 0 \end{bmatrix}.   (9)
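The transition matrix (9) and its steady-state distribution (the left eigenvector of π = πP from eq. (8)) can be constructed numerically as in the following sketch; the function name and parameter values are illustrative.

```python
import numpy as np

def drop_chain(p_drop, n_states):
    """State-transition matrix of eq. (9): a drop moves the chain one state
    (one extra sampling interval of delay) onward; a successful packet
    returns it to the minimum-delay state."""
    P = np.zeros((n_states, n_states))
    P[:, 0] = 1.0 - p_drop              # success: back to minimum delay
    for i in range(n_states - 1):
        P[i, i + 1] = p_drop            # drop: delay grows by one interval
    P[-1, 0] = 1.0                      # maximum consecutive drops reached
    return P

P = drop_chain(p_drop=0.2, n_states=5)
w, v = np.linalg.eig(P.T)               # left eigenvectors of P
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
print(pi / pi.sum())                    # steady-state state distribution
```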
[Figure 4. Gilbert-Elliott model with states Good (drop probability d_G) and Bad (drop probability d_B); the state-transition probabilities p_GG, p_GB, p_BG, and p_BB are indicated.]
If the packet drop probability is not uniform, different transition probabilities can be used for the separate states, and thus correlated packet drops can be simulated. Other Markov chains are also possible, see e.g. [172].

A common way to model a network with packet drops is the Gilbert-Elliott (G-E) model [41], [56], which is based on the Markov-chain. The G-E model has two states, one corresponding to good (G) and the other to bad (B) conditions, with separate packet drop probabilities in the good and the bad state, P(drop | G) = d_G and P(drop | B) = d_B, respectively. The transitions between the states follow a two-state Markov model. The state-transition matrix is given by

P = \begin{bmatrix} p_{GG} & p_{GB} \\ p_{BG} & p_{BB} \end{bmatrix}, \quad \begin{aligned} p_{GB} &= P\big(X(k) = B \mid X(k-1) = G\big), & p_{GG} &= 1 - p_{GB}, \\ p_{BG} &= P\big(X(k) = G \mid X(k-1) = B\big), & p_{BB} &= 1 - p_{BG}, \end{aligned}   (10)

where p_GG and p_BB are the state-holding and p_GB and p_BG the state-transition probabilities, as illustrated in Figure 4. The residence time of state i is given by

T_{GE,i} = \frac{h}{1 - p_{ii}},   (11)

where h is the time-step of the Markov-chain. The average good and bad state probabilities of the G-E model are

\pi_G = \frac{p_{BG}}{p_{BG} + p_{GB}}, \qquad \pi_B = \frac{p_{GB}}{p_{BG} + p_{GB}},   (12)

and the mean packet drop is [66]

d_{GE} = \pi_G d_G + \pi_B d_B.   (13)
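A short simulation sketch of the Gilbert-Elliott model, with the stationary probabilities (12) and the mean packet drop (13) computed for comparison against the empirical drop rate; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def gilbert_elliott(n, p_GB, p_BG, d_G, d_B):
    """Packet-drop trace (1 = dropped) from the G-E model, eq. (10)."""
    drops = np.zeros(n, dtype=int)
    bad = False
    for k in range(n):
        drops[k] = rng.random() < (d_B if bad else d_G)
        # State transition: G -> B with p_GB; B stays B with 1 - p_BG
        bad = rng.random() < ((1.0 - p_BG) if bad else p_GB)
    return drops

p_GB, p_BG, d_G, d_B = 0.05, 0.20, 0.01, 0.50
pi_G = p_BG / (p_BG + p_GB)                     # eq. (12)
pi_B = p_GB / (p_BG + p_GB)
d_GE = pi_G * d_G + pi_B * d_B                  # eq. (13)
trace = gilbert_elliott(100_000, p_GB, p_BG, d_G, d_B)
print(d_GE, trace.mean())                       # should roughly agree
```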
In Section 3.2, the Gilbert-Elliott model is fitted to data collected from an industrial environment. These models are implemented, as explained in Section 4.3.4, for realistic simulation purposes. To fit the G-E model to the data, the two drop probabilities (d_G and d_B) and the state-transition probabilities (p_GB and p_BG) must be identified from the data. The model identification is a Hidden Markov Model fitting problem [66], where the observations, in this case the packet drops, are available, and the underlying states and emission probabilities are estimated. To evaluate the model fit on the data, using second order statistics over different time-scales is a standard approach [66].

The time-scales are defined as follows. The stochastic process X can be examined on different time-scales m by taking the average of non-overlapping blocks of size m:

X^{(m)}(k) = \frac{1}{m}\big( X(mk - m + 1) + \cdots + X(mk) \big).   (14)
For time-series with little data, averaging with a sliding window or partly overlapping windows of size m can be used. The model fit is evaluated by the mean packet drop (13) and the normalized error in standard deviation

\sigma_{norm}(m) = \frac{\sigma_D(m) - \sigma_{GE}(m)}{\sigma_D(1)},   (15)
where σ_D and σ_GE are the standard deviations of the data and of the Gilbert-Elliott model at time-scale m. The error (15) is zero if the variances coincide, and one if the difference in variances is as large as the variance of the data. The overall model fit is evaluated with the mean of the normalized standard deviation error over logarithmically spaced time-scales, listed in the set M:

\sigma_{tot} = \frac{1}{M} \sum_{m \in M} \sigma_{norm}(m).   (16)
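The aggregation (14) and the fit measures (15)-(16) are easy to compute from a recorded drop trace; a minimal sketch with illustrative function names:

```python
import numpy as np

def aggregate(x, m):
    """Non-overlapping block averages X^(m)(k), eq. (14)."""
    n = len(x) // m
    return x[:n * m].reshape(n, m).mean(axis=1)

def sigma_norm(data, model, m):
    """Normalized standard deviation error at time-scale m, eq. (15)."""
    return (aggregate(data, m).std() - aggregate(model, m).std()) / data.std()

def sigma_tot(data, model, scales):
    """Overall fit: mean error over the set M of time-scales, eq. (16)."""
    return np.mean([sigma_norm(data, model, m) for m in scales])

M = np.unique(np.logspace(0, 3, 10).astype(int))   # logarithmically spaced
# sigma_tot(measured_trace, simulated_trace, M) would give the overall fit.
```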
The statistical properties of the Gilbert-Elliott model and of higher order Markov models are derived in [66]. The coefficient of variation

c_v \equiv \frac{\sigma(X)}{E(X)}   (17)
for the G-E model is

c_v(m) = \frac{1}{m}\left( \frac{1}{d_{GE}} - 1 + \frac{2\, p_{GB}\, p_{BG}\, (1 - p_{GB} - p_{BG})\, (d_G - d_B)^2}{(p_{GB} + p_{BG})^2\, (p_{GB} d_B + p_{BG} d_G)} \left( 1 - \frac{(1 - p_{GB} - p_{BG})^m}{m\,(p_{GB} + p_{BG})} \right) \right),   (18)
from which the variance at different time-scales can be calculated as

\sigma_{GE}(m) = c_v(m)\, d_{GE}.   (19)
2.5. Jitter Margin

Control with packet drops and varying delay stemming from a network is a complex case to analyze, because of the stochastic and time-varying nature of the problem. Ensuring stability of NCSs has been the subject of much research lately [65]. Some results deal with optimal control [95], jump-linear Markov models [172], and the jitter margin [23], [72].

The jitter margin [23] defines the amount of additional delay that a control system can tolerate without becoming unstable. The delay may vary in any way, provided that it is bounded by the jitter margin δ_max. By tuning a conventional controller such that the control loop has a positive jitter margin, the control loop is stable for network induced delay jitter and packet drop bounded by the jitter margin.

The jitter margin theorem states that, in the continuous-time case, the closed loop system with process G(s) and controller G_c(s) is stable for any additional delay 0 ≤ δ(t) ≤ δ_max in the loop if [72]

\left| G_{cl}(j\omega) \right| = \left| \frac{G(j\omega)\, G_c(j\omega)}{1 + G(j\omega)\, G_c(j\omega)} \right| < \frac{1}{\delta_{max}\, \omega}, \quad \forall \omega \in [0, \infty),   (20)

or equivalently

\delta_{max} < \frac{1}{\left| G_{cl}(j\omega) \right| \, \omega}, \quad \forall \omega \in [0, \infty).   (21)

In the discrete-time case the criterion becomes

\left| \frac{G(e^{j\omega})\, G_c(e^{j\omega})}{1 + G(e^{j\omega})\, G_c(e^{j\omega})} \right| < \frac{1}{N_{max} \left| e^{j\omega} - 1 \right|}, \quad \forall \omega \in [0, \infty),   (22)

where

N_{max} = \delta_{max} / h   (23)
is the jitter margin in terms of sampling intervals, and h is the sampling interval of the control loop. The mixed discrete-continuous-time case is the same as (22), provided that the sampling interval is chosen properly, i.e. sufficiently small to prevent aliasing. The jitter margin is in essence an extension of the phase margin [72]. In the case of only packet drop, the delay follows a sawtooth shape, as in Figure 3, and Mirkin's lemma [106] can be used, which makes the jitter margin 57 % less conservative.
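The criterion (21) suggests a direct numerical recipe: evaluate the closed-loop frequency response on a dense grid and take the smallest value of 1/(|G_cl(jω)| ω). The sketch below does this for a loop chosen here purely for illustration (an integrator process under proportional control); the function name and grid are assumptions of this sketch.

```python
import numpy as np

def jitter_margin(G, Gc, w=np.logspace(-3, 3, 20000)):
    """Numerical jitter margin from eq. (21):
    delta_max = inf_w 1/(|G_cl(jw)| w), evaluated on a finite grid.
    G and Gc are callables returning frequency responses at s = jw."""
    Lw = G(1j * w) * Gc(1j * w)          # loop transfer function
    Gcl = Lw / (1.0 + Lw)                # closed-loop transfer function
    return np.min(1.0 / (np.abs(Gcl) * w))

# Example: integrator process 1/s with proportional controller Gc = 1;
# time-varying extra delays below the printed bound cannot destabilize it.
dmax = jitter_margin(lambda s: 1.0 / s, lambda s: np.ones_like(s))
print(dmax)
```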
2.6. The PID Controller in Networked Systems

PID controllers have the reputation of being simple, yet delivering acceptable performance. Their wide use in industry suggests that they will be applied in the NCS case as well. Traditionally used controllers, such as the PID controller, have been shown to work well also in the networked control case [47]. In this thesis, the discrete-time PID controller of the form

u(k) = K_p \left( 1 + \frac{h}{T_i\,(z - 1)} + \frac{T_d N_d\,(z - 1)}{(T_d + N_d h)\, z - T_d} \right) e(k)   (24)

is used, where the control signal u is calculated from the error signal e = y_r − y between the set-point value and the actual process output. K_p is the controller gain, T_i and T_d are the integration and derivative times, respectively, and N_d is the derivative filter constant. The sampling interval h is naturally used at the sensor as well, and determines the packet rate over the wireless network. T_i and T_d are related to the PID controller integral and derivative gains K_i and K_d through

K_i = K_p / T_i, \qquad K_d = K_p T_d.   (25)
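As a reference, the sketch below realizes a discrete-time PID with filtered derivative using the gain relations (25). It uses a backward-Euler derivative filter, which may differ in detail from the exact discretization in (24), so it should be read as illustrative only.

```python
class DiscretePID:
    """Discrete PID with filtered derivative; gains per eq. (25)."""

    def __init__(self, Kp, Ti, Td, Nd, h):
        self.Kp, self.h = Kp, h
        self.Ki = Kp / Ti          # integral gain, eq. (25)
        self.Kd = Kp * Td          # derivative gain, eq. (25)
        self.Tf = Td / Nd          # derivative filter time-constant
        self.i = 0.0               # integrator state
        self.d = 0.0               # filtered derivative state
        self.e_prev = 0.0

    def update(self, e):
        self.i += self.Ki * self.h * e
        a = self.Tf / (self.Tf + self.h)     # backward-Euler filter pole
        self.d = a * self.d + (1.0 - a) * self.Kd * (e - self.e_prev) / self.h
        self.e_prev = e
        return self.Kp * e + self.i + self.d
```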
2.6.1. Tuning of PID controllers in Networked Control Systems

The tuning of PID controllers for different cases and requirements is an extensively studied topic, with an abundance of tuning rules and methods [185]. Tuning PID controllers for networked control systems is a difficult task because of the varying delay induced by the network, for which stability is hard to show. Some methods use Lyapunov functions [103], LMIs [178], MJLS [74], or the power spectrum [94]. Another method to guarantee stability is to use the jitter margin theorem presented in Section 2.5. In the following, some approaches and PID controller tuning methods for varying time-delay systems are presented. These are used in the adaptive control schemes and simulations of this thesis.

PID controller tuning methods and rules for varying delay control systems using the jitter margin theorem have been developed by Eriksson [47]. The tuning rules are developed into formulas for the K_p, K_i, and K_d gains of the PID controller. The basics of one tuning rule are briefly repeated here. This tuning is used in some of the simulation studies in Sections 4.7.1–5.1. Consider a first order lag plus integral plus delay process (the so-called FOLIPD model)
G(s) = \frac{K}{s\,(Ts + 1)} e^{-s\tau},   (26)

where K is the velocity gain, T the time-constant, and τ the time-delay. The PID tuning is given in the form [48]

K_p = \frac{a}{KL}, \qquad K_i = 0, \qquad K_d = \frac{aT}{KL},   (27)

as a function of a tuning parameter a, which depends on the desired jitter margin δ_max:

a = \alpha \, \frac{0.9485\, L}{\delta_{max} + 0.6356\, L},   (28)
where α is a tightness factor describing how close to the stability bound the tuning is selected. Usually the tightest possible tuning is selected, with α = 1. In the tuning, the total constant delay L (2), including the process and minimum network delays, is used.

The control system with a PID controller tuned in this way can tolerate any excess delay smaller than δ_max without risk of instability. The parameter a gives the maximum gain of the tuning for the given jitter margin. There are other similar tuning methods, each giving the gain a through a different formula, depending on the design goal. Instead of using the jitter margin, robustness to delay jitter can be sought by optimizing the worst cost over several step responses with different network realizations, although this method does not guarantee stability [129].

The PID controller can also be tuned by an optimization procedure. When using optimization, there is a choice of several different cost functions for evaluating the control performance. The most common are the IAE, ISE (Integral of Absolute/Square Error) and the ITAE, ITSE (Integral of Time-weighted Absolute/Square Error) criteria [185]:

J_{IAE} = \int_{t_1}^{t_2} \left| e(t) \right| dt   (29)

J_{ITAE} = \int_{t_1}^{t_2} t \left| e(t) \right| dt   (30)

J_{ISE} = \int_{t_1}^{t_2} e^2(t)\, dt   (31)

J_{ITSE} = \int_{t_1}^{t_2} t\, e^2(t)\, dt   (32)
where e(t) = y_r(t) − y(t) is the difference between the reference and the output of the process. The cost criterion is usually evaluated over a step response, beginning at t_1 and lasting until the response has settled down at t_2, and minimized with respect to the PID parameters. The time-weighted cost criteria emphasize the steady-state error and discount the transients in the beginning, whereas the other costs are suitable for measuring the impact of disturbances. The cost criteria can also be used in multiobjective optimization of PID controllers for NCSs [48], where the control performance is optimized with a desired target jitter margin or a jitter margin constraint.

In the next section and in Section 3.3, two variants of the PID controller are presented, which are modified to better suit the NCS case where packet drops are present.
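The criteria (29)-(32) are straightforward to approximate from a logged step response; a minimal rectangle-rule sketch (function name illustrative):

```python
import numpy as np

def control_costs(t, e):
    """IAE, ITAE, ISE, and ITSE from sampled error data, eqs. (29)-(32)."""
    dt = np.diff(t, prepend=t[0])           # rectangle widths (first is 0)
    return {
        'IAE':  np.sum(np.abs(e) * dt),     # eq. (29)
        'ITAE': np.sum(t * np.abs(e) * dt), # eq. (30)
        'ISE':  np.sum(e**2 * dt),          # eq. (31)
        'ITSE': np.sum(t * e**2 * dt),      # eq. (32)
    }
```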
2.6.2. The PID PLUS Controller

A variation of the PID controller is the event based PID, an extension of the conventional PID controller to a varying calculation interval, where the integral and derivative parts take into account the time passed since the previous iteration [181]. The PID PLUS controller is a heuristic PID control approach to packet drops developed by industry [147]. The main idea of the PID PLUS is to implement an integral anti-windup type scheme in the controller for dropped measurement and control packets. The integral and derivative actions of the controller are calculated over the time-interval between two consecutively received packets. Thus, the PID PLUS is event-driven in the sense that if no new information is received, the control output is constant. The structure of the PID PLUS controller is depicted in Figure 5. The filter equation that replaces the integral action is
f(k) = f(k-1) + \big( u(k-1) - f(k-1) \big) \big( 1 - e^{-\Delta T / T_i} \big),   (33)

[Figure 5. PID PLUS controller block diagram [147].]
where f is the output of the filter, u is the controller output, T_i is the integration time, and ΔT is the time-difference between two consecutively received packets. The integral filter is derived from the typical integral anti-windup scheme with the filter F(s) = \frac{1}{T_i s + 1}, which acts as an integrator when arranged in a positive feedback loop:

\frac{F(s)}{1 - F(s)} = \frac{1}{T_i s}.   (34)

Discretizing the filter with sampling interval h leads to F(q^{-1}) = \frac{1 - \gamma}{1 - \gamma q^{-1}}, where γ = e^{-h/T_i}. The input to the filter is the previous control value, thus

(1 - \gamma q^{-1})\, f(k) = (1 - \gamma)\, u(k-1) \;\Rightarrow\; f(k) = f(k-1) + \big( u(k-1) - f(k-1) \big) \big( 1 - e^{-\Delta T / T_i} \big).   (35)
The last implication is obtained by adding and subtracting f(k−1), and by replacing the constant sampling interval h with the time-difference ΔT, which results in the filter equation (33) of the PID PLUS. The derivative is calculated according to an approximation of the derivative in which the time ΔT since the previous measurement is taken into account:

u_D(k) = K_d\, \frac{e(k) - e(k-1)}{\Delta T}.   (36)
The integral and derivative actions thus depend on the time between the previously received measurement packets, and they are calculated only when a new measurement has arrived and the new value flag is set by the communication stack, as indicated in Figure 5. The PID PLUS scheme is compared to the proposed IMC tuning and outage heuristic in Section 5.4.
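An event-driven sketch of the PID PLUS update, following eqs. (33) and (36); only the measurement path of Figure 5 is shown, and the class layout and names are illustrative.

```python
import math

class PIDPlus:
    """PID PLUS sketch: states are updated only on newly received packets,
    over the elapsed time dT between packets, eqs. (33) and (36)."""

    def __init__(self, Kp, Ti, Kd):
        self.Kp, self.Ti, self.Kd = Kp, Ti, Kd
        self.f = 0.0          # filter state replacing the integral
        self.u = 0.0          # last control output (held between packets)
        self.e_prev = 0.0

    def new_measurement(self, e, dT):
        # Filter update with the previous controller output, eq. (33)
        self.f += (self.u - self.f) * (1.0 - math.exp(-dT / self.Ti))
        uD = self.Kd * (e - self.e_prev) / dT    # derivative, eq. (36)
        self.e_prev = e
        self.u = self.Kp * e + self.f + uD
        return self.u
```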
2.7. Internal Model Control

The IMC control approach, first brought into a comprehensive framework by [55], uses a model G_m(s) of the process G_p(s). The difference between the model and the process is fed back to the controller G_c(s). In the case of a perfect model, choosing G_c(s) = G_m(s)^{-1} yields perfect control. To make the controller realizable, a low-pass filter
G_f(s) = \frac{1}{(\lambda s + 1)^n},   (37)

with an appropriate integer n to make the closed loop strictly proper and a positive tuning parameter λ, is added to the controller. Now the closed loop transfer function becomes

G_{cl}(s) = G_f(s).   (38)
Thus, λ determines the speed of the control, and a step response of desired speed is achieved.
2.7.1. Internal Model Control Design

In practice, the following steps are taken because of noise, modeling error, and problems when inverting the model. If the process model is non-invertible, it is split into an invertible part G_m^-(s) and a non-invertible part G_m^+(s), with G_m(s) = G_m^+(s) G_m^-(s). The non-invertible part G_m^+(s) contains all positive zeros and time-delays e^{-Ls}, which upon inverting would become unstable or non-realizable. The rest of the model constitutes the invertible part, which is incorporated into the controller G_c. The non-invertible part is treated as unmodeled dynamics and is handled by the feedback.

The IMC approach can be transformed into an output feedback control loop, with the model included in the controller. With elementary block diagram algebra, the IMC controller becomes [55]

G_{IMC}(s) = \frac{G_c}{1 - G_m G_c} = \frac{G_m^-(s)^{-1}\, G_f(s)}{1 - G_m^+(s)\, G_f(s)}.   (39)

The obtained closed-loop system then becomes

G_{cl} = \frac{G_{IMC} G_p}{1 + G_{IMC} G_p} = \frac{G_f\, G_p^+\, \frac{G_p^-}{G_m^-}}{1 + G_f \left( G_p^+\, \frac{G_p^-}{G_m^-} - G_m^+ \right)},   (40)
where G_p(s) = G_p^+(s) G_p^-(s) is factored similarly to the process model. If the process model is exact, i.e. G_m^- = G_p^- and G_m^+ = G_p^+, this reduces to

G_{cl} = G_f\, G_p^+.   (41)
That is, the obtained closed-loop system is a low-pass filter with the desired time-constant λ, together with the non-invertible part, which cannot be avoided.
In practice, especially in NCSs where the communication is in discrete packets, the controller is implemented as a discrete-time algorithm, either by continuous-time design followed by discretization of the controller, or by designing the controller in discrete time from the start. The discrete-time design procedure is similar to the continuous-time case, using the same controller structure. Given a continuous-time process model G_m(s), the model can be discretized to G_m(z) using a suitable discretization method.

In the discrete-time case, the non-invertible part contains the delays z^{-d}, all zeros outside the unit circle, and the negative zeros inside the unit circle, which would otherwise cause oscillations in the control signal. The separation is not unique, but the all-pass form is advantageous [55]. For all p_1 zeros v_i outside the unit circle, a pole at 1/v_i is added, forming an all-pass form of the non-invertible part. All p_2 oscillating zeros w_j in the left half of the unit circle are included as well, balanced by a pole at zero. The non-invertible part thus becomes

G_m^+(z) = z^{-(d+1)} \prod_{i=1}^{p_1} \left( \frac{z - v_i}{z - 1/v_i} \right) \left( \frac{1 - 1/v_i}{1 - v_i} \right) \prod_{j=1}^{p_2} \left( \frac{z - w_j}{z} \right) \left( \frac{1}{1 - w_j} \right).   (42)
The corresponding discrete-time low-pass filter is

G_f(z) = \frac{(1 - \gamma)^n}{(1 - \gamma z^{-1})^n},   (43)

where

\gamma = e^{-h/\lambda}   (44)
gives the relationship between the continuous-time λ and the corresponding discrete-time filter coefficient γ. More elaborate IMC controller design discussions can be found in [135] and [86]. The case of an IMC controller in an NCS is studied further in Section 3.4. In the following, an IMC based tuning method for PID controllers, also used in this thesis, is described.
2.7.2. IMC‐PID Controller Design The IMC design procedure can result in a conventional PID‐type controller with certain model choices or approximations. Implementing the IMC controller with a PID controller means that the tuning of a PID controller is selected based on the IMC design [135]. This is called IMC‐PID tuning, and is often readily implemented as the PID control structure is simple and available in automation products.
The IMC-PID tuning usually involves approximations to convert the IMC controller to PID type. Additionally, the delay in the non-invertible part of the IMC controller (39) must be approximated to implement the controller. An example of an IMC-PID tuning rule for a first-order process with time-delay, approximated with the first-order Padé approximation and filter order n = 1, is [135]

K_p = \frac{T}{K(\lambda + \tau)} \left( 1 + \frac{\tau}{2T} \right), \qquad K_i = \frac{1}{K(\lambda + \tau)}, \qquad K_d = \frac{T\tau}{2K(\lambda + \tau)},   (45)

with a pre-filter

G_f(s) = \frac{1}{T_f s + 1}, \quad \text{where } T_f = \frac{\lambda \tau}{2(\lambda + \tau)}.   (46)
This tuning is used in the simulation cases of Sections 5.2 and 5.4. A table of other IMC‐PID design alternatives can be found in [135].
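The tuning rule (45)-(46) reduces to a few lines of arithmetic; a minimal sketch with an illustrative function name and example values:

```python
def imc_pid(K, T, tau, lam):
    """IMC-PID gains for a FOTD process, eqs. (45)-(46): first-order Pade
    approximation of the delay, filter order n = 1."""
    d = K * (lam + tau)
    Kp = T / d * (1.0 + tau / (2.0 * T))
    Ki = 1.0 / d
    Kd = T * tau / (2.0 * d)
    Tf = lam * tau / (2.0 * (lam + tau))   # pre-filter time-constant, eq. (46)
    return Kp, Ki, Kd, Tf

# Example: K = 1, T = 2 s, tau = 0.3 s, desired filter constant lam = 0.5 s.
print(imc_pid(1.0, 2.0, 0.3, 0.5))
```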
2.8. Network Quality of Service in Networked Control Systems

In networked control systems, several control loops are distributed in a plant and connected by a shared wired or wireless network. The goal is for the network to deliver sufficient QoS with minimum effort to obtain a desired quality of control [149]. In the literature, there is no systematic study assessing the control performance in relation to the network quality of service. The control performance is usually measured with the traditional integral cost functions, see Section 2.6 [71]. An example of a network and control performance comparison is given in [90] and [92], where different wired networks and their effect on the control system are studied as a function of the controller sampling interval.

In this section, the effect of the network quality of service on the control performance is discussed. First, network congestion, which may cause information loss, and traffic rate control are considered from the control application point of view. The rate control algorithms for control systems reviewed in the next subsection are the first methods in the literature in the field of network adaptive control. This issue is studied further in Section 3.5, where a network QoS cost for control systems is presented. Practical insights are gained in the simulations of Sections 4.7 and 5.1.
2.8.1. Network Performance Considerations

The network traffic in control applications is considerably different from that in computer networks. In WNCSs, periodic communication of a small amount of data, for example a measurement value, needs to be carried out reliably in real-time. In computer networks, typical usage is the transfer of files in bursts of large packets, where the average throughput is important. The required QoS is thus significantly different in WNCSs compared to computer networks, and transferring the knowledge from the computer network field to WNCSs is not straightforward.

The key characteristics of a wireless network affecting closed-loop control are communication delay and packet loss. The typical end-to-end delay of a moderately sized IEEE 802.15.4 network for control applications is less than 100 ms, see Section 4.7.3. Wireless control may not be applicable to very fast or unstable processes, due to the inherent unreliability of wireless communication. The current wireless standards, WirelessHART and ISA100.11a, are both intended for applications where a delay jitter of about 100 ms is tolerated [166]. In other words, these networks can be considered deterministic when examined at larger time-scales. In current practical stable wireless control applications, the minimal sampling interval is about 1 s. This is reflected in the devices sold today, where the sampling interval is restricted by the device manufacturers to a minimum of one second, partly also because of energy constraints.

In this thesis, the opposite case is considered, with non-deterministic networks where the traffic in the network affects the network QoS and further the control performance. In wireless networks the QoS can never truly be guaranteed, as interference can always hamper the communication. This is especially troublesome in wireless control, since deviating from real-time operation can cause physical damage. The overall wireless automation system must thus be designed such that the probability of a fault is low and no damage is caused when a fault happens. A good networked control system design should exhibit graceful degradation, where the control performance is minimally, or non-catastrophically, degraded when the network QoS decreases, and turn to safe operation when the network malfunctions. This restricts wireless control to stable processes, where the process remains steady even if the control is open-loop.

There are trade-offs between the packet rates, control performance and network congestion. The network performance depends mostly on the utilized MAC protocol, as it determines the access to the network [96]. The general performance of a CSMA-type MAC is good with low traffic, but becomes poor with increased traffic, mainly due to a larger probability of collisions. This is further aggravated by the first-come last-served behavior of the exponential backoff mechanism in the case of a collision.
If a low sampling rate compared to the process dynamics is used, the control is poor. Increasing the sampling rate improves the control performance until the network becomes congested and the control performance starts to degrade, due to packet drops or increased communication delay. Naturally, the control performance generally degrades when packets are dropped and thus less information is available at the controller. In a NCS with limited bandwidth there exists an optimal region, in terms of the control performance, for the sampling interval of the control loops [92]. Selecting an optimal control bandwidth is a cross-layer optimization, where the performance of the whole system is optimized by tailoring the different parts of the system to suit each other.
2.8.2. Network Congestion and Traffic Rate Control

In wireless networks, the techniques for guaranteeing a specified QoS for the user can be divided into two parts: admission control and scheduling. With admission control, users are admitted to use the medium only when the network can guarantee to meet the user's QoS request. The task then becomes to schedule or prioritize the admitted users on the available bandwidth such that everyone gets the best possible QoS, according to their needs. For more details, see for example [68] and the references therein. This framework is called radio resource management, where the target is to deliver specified QoS guarantees to each user.

The required QoS depends on the application, and can be, for example, a bandwidth, a delay, or a packet drop constraint. In WNCSs the problem is how to share the limited available bandwidth among all the control loops, such that every loop attains an equal control performance. There exist several algorithms to calculate the optimal bandwidth shares, sampling intervals [42], [71], or transmission schedules to be allocated to each controller, for instance by using utility functions, which describe the control quality given a certain bandwidth [27]. These methods rely on a model of the network and the control system, and the optimal, according to some criterion, schedule or allocation is calculated beforehand. Perfect communication or a simple network model, where a certain bandwidth is divided among the control loops, is usually assumed.

Other bandwidth control approaches are dynamic scheduling or transmission heuristics such as maximum-error-first [159], where the sensor with the largest error should transmit. These methods have the drawback that the schedule needs to be updated continuously or the transmission opportunities arbitrated online in real-time, which consumes bandwidth and may even be impossible in practice. Similar issues are encountered in embedded control systems with task scheduling [22], but many of the results cannot be applied to WNCSs, as the scheduling must be distributed over the network, which requires communication, in contrast to processor scheduling, where all information is locally available.
In reality, the network is more complex and the actual performance of the network differs from what is assumed, because of interference, other overhead traffic, and simplified models. The operation of the network can additionally change over time, for example when new devices are installed, or when the traffic changes depending on the control tasks that are currently executing. This calls for online control adaptation in WNCSs, which adjusts the sampling interval and the used bandwidth of networked controllers. A new method for control system traffic adaptation is proposed in Section 5.2.

There are many rate control approaches proposed in the literature. The adaptive rate fallback method, which is used in IEEE 802.11, is a network layer communication rate adjustment method. The communication bit-rate is adaptively reduced in case of poor signal strength. This takes care of poor communication conditions by reducing the bit-rate, and thus increasing the robustness of communication, when the radio signal quality is diminished [32]. An example is in [99], where modulation and coding rates are changed depending on the network QoS.

Radio resource management only tries to optimize the communication depending on the channel conditions. There must furthermore be cross-layer optimization for adaptation on the application layer to adjust the generated traffic; otherwise queues will fill up with data that the network cannot deliver [90]. One example of application layer congestion control is the Rate Adaptation Protocol [134]. It adjusts the amount of data to be sent for a multimedia stream depending on the network QoS, namely packet loss.

In control systems, the corresponding action is to change the sampling interval. Adapting the sampling interval to the available network bandwidth represents the first attempts in the literature at network adaptive control. In [32] and [33], several different update rules to change the sampling interval between specified bounds depending on the round-trip-time are proposed for cases of packet drop due either to interference or to congestion. In [74], the sampling interval is adapted based on dropped packets, using parameter estimation of a Markov model related to the network state, similar to the Gilbert-Elliott model (Section 2.4.2), followed by a certain sampling update policy. Another approach is to decide the length of the next sampling interval based on the available bandwidth and the control error [157]. In this case a criticalness factor is assigned to each loop and a heuristic formula determines how much the control error affects the sampling interval adjustment.

A control-oriented approach is to use a PI controller with saturation to change the sampling interval of the control system based on packet drop feedback [128]. In this particular case, however, no adjustment of the process controller tuning is done;
the controller is designed for a nominal sampling frequency of 200 Hz. The saturation of the PI controller limits the sampling frequency to a minimum of 100 Hz, to prevent instability due to operation far from the design sampling frequency. The trade-off between the design and operating sampling frequency is shown in [20], where a method for pre-selecting a discrete set of sampling intervals is presented, such that the degradation due to operation too far from the design point is bounded. A better approach is presented in [53], where a discretized PI controller with the sampling interval left explicitly in the control algorithm is used. A heuristic sampling interval and gain scheduling selection algorithm that depends on the packet drop and delay jitter of the network is then used. The adjustable sampling interval in the controller ensures nominal behavior even if the packet rate is changed. The heuristic controller has also been simulated with an ns-2 based network and control co-simulator in [152].

Another control-oriented approach is a queue controller where the transmission rates are controlled by a P or PI controller [4]. It is assumed that maintaining a user-specified queue length at the routers (intermediate nodes) results in a desired network QoS. A utility function of each control loop based on the process dynamics determines how much of the shared link capacity is used by the loop. A P controller is applied in [170], with an adaptive gain depending on the traffic amount relative to the point of catastrophic congestion, for control over a WLAN. Congestion control of the Internet is beginning to use control system algorithms too, such as PID controllers, instead of heuristic TCP [136].

Most of the algorithms in the literature so far require hand-selected parameters, and are thus very application specific, needing extensive testing to be applied on different systems. The adaptive control speed scheme developed in Section 5.2 adapts to the QoS of the network and does not need any arbitrary parameters, only the time-constant of the controlled process. It adjusts the sampling interval and controller tuning such that a specified network QoS is obtained.
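To make the idea concrete, a PI-type sampling-interval adapter in the spirit of [128] and [53] can be sketched as follows; the gains, limits, and drop-rate target are placeholder assumptions, not parameters from those works.

# Illustrative sketch of PI-based sampling-interval adaptation: the sampling
# interval is driven by the deviation of the measured packet drop rate from
# a target. All numeric values are assumptions.
def make_rate_adapter(target_drop=0.05, kp=2.0, ki=0.5,
                      h_min=0.1, h_max=2.0, h_nom=0.5):
    integral = 0.0
    def update(measured_drop_rate):
        nonlocal integral
        error = measured_drop_rate - target_drop   # positive -> too much traffic
        integral += ki * error
        integral = max(min(integral, h_max - h_nom), h_min - h_nom)  # anti-windup
        h = h_nom + kp * error + integral          # lengthen interval on congestion
        return max(h_min, min(h_max, h))           # saturation, cf. [128]
    return update

adapt = make_rate_adapter()
h_new = adapt(0.12)   # 12 % drops observed -> the sampling interval is increased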
2.9. Kalman Filtering in Networked Control Systems

As pointed out in Section 2.3, the Kalman filter (KF) is suitable for estimation in NCSs, especially when there are measurement packet drops. It is an optimal state-estimator, given a process model (in state-space form: A_p, B_p, C_p) and the associated state and measurement noise covariances, Q and R. The assumed state-space model of the process is

$$\begin{cases} x(k+1) = A_p x(k) + B_p u(k) + w(k) , \\ z(k) = C_p x(k) + v(k) , \end{cases} \quad (47)$$
where w and v are normally distributed white noise sequences with covariances Q and R, x is the state vector, u the input vector, and z the measurement vector. The matrices A_p, B_p, and C_p are constant and have appropriate dimensions. The state matrices can also be time-varying. Kalman filtering is performed according to the well-known prediction and update equations [104].

Estimation in the NCS case, with intermittent information, that is, packet drops, is straightforward with a Kalman filter, because the algorithm is divided into a prediction and an update step. If there is no new measurement, only the prediction step is carried out. This is natural, since the prediction is the best estimate of the current state if no new observation is received. It is equivalent to receiving a measurement with infinite variance [144]: as the measurement noise R tends to infinity, the Kalman gain K_KF tends to zero, resulting in no update. The case of missing information can be extended to partial measurements, where only a part of the measurement vector is received.

The arrival rates of the measurements determine the stability of the KF if the process is unstable. If the rates are too low, the KF is unstable. The upper and lower bounds of the stability border can be calculated by showing that the, in this case stochastic, state covariance P is bounded [97].

Optimal Kalman filtering with varying measurement delay is treated in [141], where the previous measurements, the state, and the covariance estimates are stored in buffers and the filtering is redone up to the current time every time a new measurement arrives. This is computationally heavy and the delay or time-stamps of the measurements must be known. The convergence is proven with LMIs [65]. The paper additionally presents estimation with a constant Kalman gain.

The Kalman filter can additionally be used to estimate a process load disturbance, by augmenting the model with a state for the load disturbance. Assuming a constant load disturbance, the model used in the KF is thus

$$\begin{cases} \hat{x}_{KF}(k+1) = A_{KF}\hat{x}_{KF}(k) + B_{KF} u(k) \\ \hat{y}(k) = C_{KF}\hat{x}_{KF}(k) \end{cases} = \begin{cases} \hat{x}_{KF}(k+1) = \begin{bmatrix} A_p & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \hat{x}(k) \\ \hat{D}_{load}(k) \end{bmatrix} + \begin{bmatrix} B_p \\ 0 \end{bmatrix} u(k) , \\ \hat{y}(k) = \begin{bmatrix} C_p & 1 \end{bmatrix} \hat{x}_{KF}(k) , \end{cases} \quad (48)$$
where $\hat{x}_{KF}$ is the augmented Kalman filter state vector. The load disturbance estimate is obtained through

$$\hat{D}_{load}(k) = C_{load}\hat{x}_{KF}(k) = \begin{bmatrix} 0 & 1 \end{bmatrix} \hat{x}_{KF}(k) . \quad (49)$$
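A minimal sketch of this prediction/update structure with intermittent measurements, including the load-disturbance augmentation (48)-(49), could look as follows in Python; the noise covariances and process numbers are placeholder assumptions.

# Sketch of the KF with intermittent measurements: when no packet arrives,
# only the prediction step runs (z=None). Variable names follow the text.
import numpy as np

def kf_step(x, P, u, A, B, C, Q, R, z=None):
    # Prediction step (always executed)
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    if z is not None:                      # update only when a packet arrives
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)     # Kalman gain; K -> 0 as R -> inf
        x = x + K @ (z - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
    return x, P

# Augmentation (48) for a scalar process (Ap, Bp, Cp), values assumed:
Ap, Bp, Cp = np.array([[0.9]]), np.array([[0.1]]), np.array([[1.0]])
A = np.block([[Ap, np.zeros((1, 1))], [np.zeros((1, 1)), np.eye(1)]])
B = np.vstack([Bp, np.zeros((1, 1))])
C = np.hstack([Cp, np.ones((1, 1))])
x, P = np.zeros((2, 1)), np.eye(2)
# Dropped measurement packet: prediction only
x, P = kf_step(x, P, np.array([[0.5]]), A, B, C, 0.01 * np.eye(2),
               np.array([[0.1]]), z=None)
# Load disturbance estimate (49): D_hat = [0 1] @ x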
The Kalman filter is used in the simulations of Section 4.7.1, where a varying-delay KF is used, and in Section 4.7.2, where the state-estimator with conventional PID control is compared with a jitter-margin-tuned PID. In Section 5.4.3 the KF is used as a load disturbance estimator for the control reference target.
2.10. Summary

In this chapter the preliminaries of WNCSs were summarized. The focus of the control methods is on the PID controller, because of its lightweight algorithm suitable for wireless nodes and its widespread use in industry. Several PID controller alternatives suitable for WNCSs were presented, from both the tuning and the algorithmic viewpoint, the main control algorithms being IMC, IMC-PID, and the PID PLUS controller.

Network models for packet drops were considered, based on drop probabilities and Markov chains, and the properties of the Gilbert-Elliott model were recapitulated. Then the network quality of service from the control system point of view was discussed, and the communication and control performances were related. The network QoS is related to the traffic in the network, which can be affected by changing the controller sampling interval to attain a comfortable communication rate. Several existing methods for QoS guarantees or rate control were reviewed, with special attention on methods for traffic rate control of control systems. In the literature this field is still developing and only some tentative methods have been devised: controller rate adaptation is the first and, to the knowledge of the author, the only network adaptive control method existing today. This field is developed further in the next chapter, and several new methods are presented in Chapter 5.
3. NETWORKS AND CONTROLLERS IN PRACTICE

In this chapter the issues presented previously are developed further and motivated using practical measurements and simulations. Measurements of radio environments are done to estimate network packet drop models, which are later used in the simulations. Properties of the control algorithms and tuning methods important for NCSs are established. The results are extensively used in the proposed adaptive control schemes of Chapter 5.

The relation between the network quality of service, the controller tuning, and the control performance is studied. The effect of the network QoS on the control system is discussed and a network cost for control measure is proposed. These issues are important in the network QoS adaptive control schemes of Chapter 5.
3.1. Measurements of Radio Environments

Realistic packet drop models of wireless networks are needed to study WNCSs, since information loss affects the control performance. One approach is to measure the packet drop with wireless nodes in authentic environments and use the results to build data-based network models. This allows one to study the effects and the behavior of a real network on the control system in a particular environment. The packet loss models obtained in the next section are utilized in the simulations of Sections 4.7.3 and 4.7.4, to obtain realistic simulation results.

The physical properties of an existing radio environment are assessed by carrying out actual measurements at the target site, as described next. The transmitter device consists of a sensor node equipped with a Texas Instruments CC2431 radio module connected to two monopole WLAN antennas, with a separation of 12.5 cm. Similarly, four receivers are arranged in an array, placed 6.25 cm from each other, which is half the wavelength at 2.4 GHz. The IEEE 802.15.4 radio channel 26 is used, which has the least frequency overlap with the IEEE 802.11 radio, to mitigate packet drop due to WLAN interference and other devices. The transmission power is set to 0 dBm and measurements are taken for several different distances and locations. The transmitter switches between the two antennas for every consecutive packet, thus eight different signal paths are
recorded. A total of 15000 packets of size 119 bytes are transmitted for each location at an interval of 0.1 seconds. Packets are recorded with their RSSI value (Received Signal Strength Indicator) and an indication of whether the packet was correctly received with no bit errors, or dropped.

These measurements differ from other similar measurements, e.g. [77], where only the received signal strength is measured, not the actual packet reception. Here, the same hardware as would be used in a real application is used, not a specialized measurement device, which could differ significantly from the signal reception capabilities of the actual device.

Measurements are performed in an industrial assembly hall and an office. The estimated models are presented in the next section. In the industrial hall there are machines, racks of tools, and open spaces. Measurements are made in different parts of the hall, which can be categorized as light: open space; medium: mostly open with machines standing on the floor; and heavy: metal racks of tools obstructing the line-of-sight. The distances between the transmitter and receiver for the different measurements are in the range of 25-35 m. The office is an indoor environment with plaster walls, used as a reference in the simulations of Section 4.7.3. The measurements are made from room to room as depicted in Figure 6. No communication was possible across more rooms in this environment.

The measured packet drop probabilities from the prototype locations of the real office are shown in Figure 7. The drop probability is given for every antenna pair, ordered such that every odd-numbered link represents a transmission from antenna 1. The packet drop probability varies from location to location and there is significant variation between the antenna pairs. This implies that the signal strength is very sensitive to the antenna location, due to multipath fading.
Figure 6. Measured prototype locations and links for the office. Transmitter to receiver direction indicated by arrow.
Similar results are obtained in the industrial hall case shown in Figure 8, with even more variability between the antenna pairs.

The histograms in Figure 9 show the number of consecutively dropped packets in the industrial hall. The distribution is long-tailed, as it is approximately linear on the log-log scale. This implies that long network outages are possible, although unlikely. The control system should thus be designed to handle outages of unbounded length; one method is proposed in Section 5.4.

As the most common outage in the previous results is one packet, the minimum outage length is upper bounded by the packet interval, 0.2 s. The question remains what the shortest outage is. Therefore, several new measurements in the medium environment of the industrial hall are done with a faster packet rate. One transmitting antenna with a 10 ms packet interval and four receivers are used. The histograms of the consecutively dropped packets are shown for one representative result of the second measurement campaign in Figure 10, from which it is evident that the most frequent outage is about 40 ms, which is over a decade less than the packet interval in the first measurements. The average outage length, which varies from 0.01 s to 1 s, is given for all the new measurements in Figure 11.
Figure 7. Measured packet drop probabilities in the office for all the prototype locations and links.
Figure 8. Measured packet drop probability for different locations in the industrial hall.
Figure 9. Histogram of consecutive dropped packets for every link in the industrial hall, Heavy 1 measurement point.
Figure 10. Histogram of outage length of the second measurement campaign in the medium industrial environment.
Figure 11. Average outage time for all the links in the second measurement campaign in the medium industrial environment.
3.2. Estimated Gilbert-Elliott Models

Based on the measurements performed in the industrial hall and the office, presented in the previous section, Gilbert-Elliott packet drop models are identified from the data. The data from each link are individually fitted to a separate G-E model using the Baum-Welch algorithm [12].

As an example, the identification results for prototype location 4 of Figure 6 are illustrated in Figure 12. In general, the packet drop probability of the good state is low, the drop probability of the bad state is high, and the time spent in the good state (11) is longer than in the bad state. There are, however, large variations among the different links. Similar results are obtained for the other locations and the industrial hall case.
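For reference, a drop sequence from a two-state Gilbert-Elliott model of the kind fitted here can be generated with a few lines; the parameter names follow Section 2.4.2 and the values below are placeholders, not the identified ones.

# Minimal sketch of a two-state Gilbert-Elliott packet drop simulator.
import random

def gilbert_elliott(n, p_GG, p_BB, d_G, d_B, seed=0):
    """Return a list of n booleans: True = packet dropped."""
    rng = random.Random(seed)
    good = True
    drops = []
    for _ in range(n):
        d = d_G if good else d_B            # drop probability in current state
        drops.append(rng.random() < d)
        stay = p_GG if good else p_BB       # p_GG = P(stay good), p_BB = P(stay bad)
        if rng.random() >= stay:
            good = not good
    return drops

drops = gilbert_elliott(15000, p_GG=0.99, p_BB=0.90, d_G=0.01, d_B=0.8)
avg_drop = sum(drops) / len(drops)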
Figure 12. Gilbert‐Elliott model for each link of prototype location 4 in the office. Grey bar indicates mean state‐residence time (11) and black bar packet drop probability.
Figure 13. Mean of the normalized standard deviation error between data and Gilbert‐Elliott model for measurements from the office.
The mean packet drop (13) absolute error between the fitted models and the measurements is small, less than 0.01, for all the links in both studied environments. In Figure 13 the mean of the normalized standard deviation error (16) for different time-scales (14) is shown between the data and the model. The model fit is satisfactory, with normalized errors of less than 25% of the total standard deviation for both the office and industrial hall environments. A better fit than the one obtained with the Baum-Welch algorithm may be obtained by doing a mean square error fit of the model using the variance [66]. A better fit is also achieved by using a Markov chain model with more states. With three states the maximum standard deviation error is less than 10%.
3.3. The Networked PID Controller

As noted previously, the PID controller is likely to be applied in industrial wireless control systems. The conventional PID controller can be modified to suit WNCSs better. One proposal is the PID PLUS controller discussed previously in Section 2.6.2. A new PID controller structure for NCSs proposed in this thesis is the Networked PID (NPID) controller, with the architecture shown in Figure 14.
Figure 14. Networked PID control in a NCS setting.
The Networked PID is a PID controller split into two parts and distributed over the network, where part of the algorithm resides at the sensor. Thus a “smart sensor” with some computational abilities is needed. The only additional information the sensor needs is the reference signal. On the sensor side the error e, the integral of the error e_Σ, and the derivative of the error e_Δ are calculated. The three terms are then transmitted to the controller, where the final control signal is calculated at the actuator side by

$$u(k) = K_p e(k) + K_i e_\Sigma(k) + K_d e_\Delta(k) . \quad (50)$$
This division is motivated by the good properties of the control architecture of Figure 1a, where the estimate, or in this case e, e_Σ, and e_Δ, is updated and exact even if packets are dropped. Whenever the controller receives a packet, the control signal is correct. If no data from the sensor is received, the previously received values can be held, as in conventional PID control in a NCS setting. A similar approach is taken in [132], with event-driven control, where the proportional, integral, and derivative actions are coded and transmitted to the controller.

The Networked PID architecture is further motivated by the extension to event- or self-triggered control [25], [132], [153], where the control signal is updated only if necessary, to save bandwidth [8]. In this case some additional computation at the sensor is needed, to decide when to send information to the actuator. The control signal would, for example, only be transmitted if it has changed by more than some threshold.
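A minimal sketch of the sensor/actuator split could look as follows; the class names and interfaces are illustrative, not an implementation from the thesis, and the backward-difference derivative is one simple choice.

# Sketch of the Networked PID split: the smart sensor computes e, its
# integral and derivative; the actuator combines them by (50) and holds the
# last received packet during an outage.
class NPIDSensor:
    def __init__(self, h):
        self.h, self.e_sum, self.e_prev = h, 0.0, 0.0
    def sample(self, y_ref, y):
        e = y_ref - y
        self.e_sum += e * self.h                   # integral of error, exact at sensor
        e_delta = (e - self.e_prev) / self.h       # derivative of error
        self.e_prev = e
        return (e, self.e_sum, e_delta)            # packet to transmit

class NPIDActuator:
    def __init__(self, Kp, Ki, Kd):
        self.gains = (Kp, Ki, Kd)
        self.held = (0.0, 0.0, 0.0)
    def actuate(self, packet=None):
        if packet is not None:                     # fresh data: terms are exact
            self.held = packet
        Kp, Ki, Kd = self.gains
        e, e_sum, e_delta = self.held              # hold during an outage
        return Kp * e + Ki * e_sum + Kd * e_delta  # control law (50)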
Next, the behavior of the NPID during an outage is analyzed. If the control packet is dropped, the output error is

$$y = \frac{G G_c}{1 + G G_c} y_r - G u_{hold} , \quad (51)$$
where u_hold is the most recently received control value. The notable difference to the conventional PID controller is that there is no windup in the output if packets from the sensor are dropped. There is still some integral windup on the sensor side, as the integral is calculated there:

$$u_{ol} = G_c G u_{hold} . \quad (52)$$
This issue is studied further in Section 3.5. The behavior of the Networked PID controller is shown in the simulations of Section 5.4.3, where it is compared with and shown to behave better than a conventional PID controller during network outages.
3.4. Internal Model Control in Networked Control Systems

In Section 2.7 the basics of IMC controller synthesis were covered. Now, the essential properties related to applying the IMC controller in NCSs are studied. First, the approximation of the closed-loop step response, which is used in the outage compensation heuristic of Section 5.4, is analyzed to determine when the heuristic can be used. Then the stability of the IMC controller in a NCS setting with delay jitter is established.
3.4.1. Approximations of Closed-loop Step Response

A model is rarely perfect, but some approximations to get a closed-loop transfer function of the desired form

$$G_{cl} \approx G_f G_p^+ \quad (53)$$
can be considered. If the non‐invertible part is identical to one, or exactly known, this reduces the closed‐loop system to
$$G_{cl} \approx G_f G_p^+ \frac{G_p^-}{G_m^-} . \quad (54)$$
The deviation from the desired closed‐loop system thus depends on the ratio between the invertible part of the process and the model. Assuming a first order process, this can be analyzed by noting that
$$\frac{G_p^-}{G_m^-} = \frac{K}{K_m} \frac{T_m s + 1}{T s + 1} \quad (55)$$
is a lag filter. The maximum error in gain is $\frac{K T_m}{K_m T}$ and in phase $\varphi = -\sin^{-1}\left( \frac{T/T_m - 1}{T/T_m + 1} \right)$ [37]. If the parameters of the process, K and T, are sufficiently well known, which can often be assumed, these errors are small and (55) can be approximated as $G_p^-/G_m^- \approx 1$; (54) then leads to $G_{cl} \approx G_f G_p^+$.

If the process is of higher order, the closed-loop system rolls off at higher frequencies, as the process denominator is in the denominator of (53). As an example, take a process with two real left-half-plane poles: one pole is canceled by the process model in the IMC design and the other is left in the closed loop.

On the other hand, the non-invertible part may contain a delay, which must be approximated in the controller implementation (39) and thus is never exact. Assuming the invertible part can be canceled out, $G_m^- = G_p^-$, the closed-loop transfer function is
$$G_{cl} \approx \frac{G_f G_p^+}{1 + \left( G_p^+ - G_m^+ \right) G_f} . \quad (56)$$
The approximation to a closed loop of the form $G_{cl} \approx G_f G_p^+$ is possible if the inequality

$$\left| \left( G_p^+ - G_m^+ \right) G_f \right| \ll 1 \quad (57)$$
holds, which depends on the difference between the non-invertible part of the process and the model. In the case that the non-invertible part is a time-delay, $G_p^+ = e^{-Ls}$, suitable approximation alternatives are, for instance, one of the following:
$$e^{-Ls} \approx 1 - Ls , \quad \text{first-order Taylor approximation,} \quad (58)$$

$$e^{-Ls} = \frac{1}{e^{Ls}} \approx \frac{1}{1 + Ls} , \quad \text{first-order inverse Taylor approximation, and} \quad (59)$$

$$e^{-Ls} = \frac{e^{-L/2\, s}}{e^{L/2\, s}} \approx \frac{1 - L/2\, s}{1 + L/2\, s} , \quad \text{first-order Padé approximation.} \quad (60)$$
The Taylor approximation is the coarsest of them all. Assuming the non-invertible part in (57) is a pure time-delay, approximating it with the Taylor approximation leads to
$$\left| \left( G_p^+ - G_m^+ \right) G_f \right| = \frac{\left| e^{-Ls} - 1 + Ls \right|}{\left| (\lambda s + 1)^n \right|} = \frac{\sqrt{2 - 2\left( \cos(L\omega) + L\omega \sin(L\omega) \right) + L^2\omega^2}}{\left( 1 + \lambda^2\omega^2 \right)^{n/2}} \ll 1 , \quad \forall s = j\omega , \quad (61)$$
which cannot be solved analytically. The value of the left side of the inequality (61) is approximately zero at low frequencies, and at high frequencies ($\omega \to \infty$) it approaches $L/(\lambda n)$. The inequality

$$\lambda n \gg L \quad (62)$$
must thus hold, according to (57), for the approximation (53) to hold in the case of a time-delay in the process.
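Since (61) has no analytic solution, it can be checked numerically on a frequency grid; the following sketch evaluates the left-hand side of (61) for given L, λ, and n (the grid resolution and upper frequency are arbitrary choices).

# Numeric evaluation of the left-hand side of (61), which should stay well
# below 1 for the approximation (53) to hold.
import numpy as np

def approx_error(L, lam, n, w_max=1e3, n_pts=100000):
    w = np.linspace(1e-6, w_max, n_pts)
    num = np.sqrt(2 - 2 * (np.cos(L * w) + L * w * np.sin(L * w)) + (L * w) ** 2)
    den = (1 + (lam * w) ** 2) ** (n / 2.0)
    return np.max(num / den)

# Small when the delay L is short relative to the filter tuning:
print(approx_error(L=0.1, lam=2.0, n=1))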
3.4.2. IMC Control and Jitter Margin

Using the IMC controller design and assuming that the approximation (53) holds, according to the previous discussion, then

$$\left| G_{cl} \right| \approx \left| G_f G_p^+ \right| = \left| \frac{e^{-j\omega L}}{(j\lambda\omega + 1)^n} \right| = \frac{1}{\left| j\lambda\omega + 1 \right|^n} = \frac{1}{\left( (\lambda\omega)^2 + 1 \right)^{n/2}} . \quad (63)$$
The control loop is thus stable according to the jitter margin (21) for

$$\delta_{max} < \frac{\left( (\lambda\omega)^2 + 1 \right)^{n/2}}{\omega} , \quad \forall \omega \in [0, \infty) . \quad (64)$$
When n = 1, the minimum is at infinity, which gives a jitter margin of $\delta_{max} = \lambda$. For n > 1 the jitter margin is solved by taking the derivative of (64) and solving for the minimum, which is at $\omega^* = \frac{1}{\lambda\sqrt{n-1}}$; substituting $\omega^*$ into (64) results in

$$\delta_{max} = \begin{cases} \lambda , & n = 1 \\ \lambda \sqrt{n-1} \left( \dfrac{n}{n-1} \right)^{n/2} , & n > 1 \end{cases} \quad (65)$$

for the jitter margin. Conversely, the corresponding tuning λ can be solved, given a jitter margin constraint.
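The closed form (65), and its inverse for choosing λ from a required jitter margin, transcribe directly to code; a sketch, assuming the delay-free approximation (53) holds:

# Continuous-time jitter margin (65) for the nominal IMC loop, and its
# inverse for picking the tuning parameter lambda.
import math

def jitter_margin(lam, n):
    if n == 1:
        return lam                                            # eq. (65), n = 1
    return lam * math.sqrt(n - 1) * (n / (n - 1)) ** (n / 2)  # eq. (65), n > 1

def tuning_for_margin(delta_max, n):
    """Solve (65) for lambda, given a required jitter margin."""
    if n == 1:
        return delta_max
    return delta_max / (math.sqrt(n - 1) * (n / (n - 1)) ** (n / 2))

print(jitter_margin(lam=2.0, n=1))   # -> 2.0
print(jitter_margin(lam=2.0, n=2))   # -> 4.0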
In the case of a FOTD process, the non-invertible time-delay makes the approximation (53) invalid, as the delay must be approximated in the controller implementation, and the above stability to delay jitter is not guaranteed. The jitter margin then depends on the approximation method used for the time-delay, e.g. one of (58)-(60). Now $G_{cl}$ is according to (56) and the jitter margin inequality (21) becomes

$$\delta_{max} < \frac{\left| (\lambda j\omega + 1)^n + G_p^+ - G_m^+ \right|}{\omega} , \quad \forall \omega \in [0, \infty) , \quad (66)$$
where $G_p^+ = e^{-j\omega L}$ and $G_m^+$ is one of the delay approximations; this reduces to (64) if the delay is insignificant. Using the Taylor approximation (58) and n = 1 results in
$$\delta_{max} < \sqrt{ \frac{1}{\omega^2} - \frac{2(\lambda + L)}{\omega} \sin(\omega L) + (\lambda + L)^2 } . \quad (67)$$
In [48] a similar inequality is solved numerically. Using the same technique, the jitter margin is approximately

$$\delta_{max} \approx 0.9562 (\lambda + L) - 0.6431 L . \quad (68)$$
With a negligible delay this approximation is close to the case without a delay given in (65) (n = 1). The discrete‐time case of (63) using (43) leads to
$$\left| G_{cl}(e^{j\omega}) \right| \approx \left| \frac{(1-\gamma)^n e^{-j\omega L}}{\left( 1 - \gamma e^{j\omega} \right)^n} \right| = \frac{(1-\gamma)^n}{\left| 1 - \gamma e^{j\omega} \right|^n} \quad (69)$$
and the restriction for the jitter margin after manipulation becomes
$$N_{max} < \frac{\left| 1 - \gamma e^{j\omega} \right|^n}{\left| e^{j\omega} - 1 \right| (1-\gamma)^n} = \frac{\left( 1 + \gamma^2 - 2\gamma\cos(\omega) \right)^{n/2}}{\sqrt{2 - 2\cos\omega}\, (1-\gamma)^n} . \quad (70)$$
Substituting n = 1, the global minimum of (70) occurs at π, which gives
$$N_{max,n=1} = \frac{\gamma + 1}{2(1-\gamma)} = \frac{1}{1 - e^{-h/\lambda}} - \frac{1}{2} . \quad (71)$$
The approximation

$$N_{max,n=1} = \frac{1}{1 - e^{-h/\lambda}} - \frac{1}{2} \approx \frac{1}{1 - (1 - h/\lambda)} - \frac{1}{2} \approx \lambda / h \quad (72)$$

holds for large λ. For n > 2 there exist closed-form solutions to (70), but numerical solutions are more convenient. As an example, the jitter margins for the continuous-time (65), (67) and discrete-time cases (22), (71) are plotted later on in Figure 16.
3.4.3. Sampling Interval and IMC Tuning for Jitter Margin

The selection of the sampling interval of discrete-time controllers needs to be considered, especially as, in the case of packet drop, the induced delay jitter depends on the sampling interval and the number of consecutively dropped packets. The rule of thumb for selecting the sampling interval h for control of a first-order process is

$$\frac{T_r}{h} \approx [4 \dots 10] = N_h , \quad (73)$$
where T_r is the rise-time of the closed-loop system [183]. Using the IMC design with a specified time-constant λ = T_r, this equates to a sampling interval of

$$h = \lambda / N_h . \quad (74)$$
This relation between the IMC tuning and the sampling interval is also supported by the linear relationship between the IMC λ and the jitter margin seen in (72), or Figure 16. The resulting jitter margin with this selection of tuning and sampling interval is obtained by combining (72) with (74), which gives

$$N_h \approx N_{max,n=1} . \quad (75)$$
Thus, a suitable jitter margin in terms of consecutive packet drops with a discrete-time IMC controller can be selected directly by specifying N_h and using (74) to get the controller tuning parameter, given a fixed sampling interval. To illustrate this, the jitter margin (22) for the case described in Section 5.2.4 (closed-loop control with an IMC designed controller of a process with time-constant T = 10) is solved numerically and plotted as a function of the IMC tuning parameter λ in Figure 15. Without quantization the obtained jitter margin is as specified at N_max = N_h = 8, and with quantization the jitter margin is at least the specified one.

To accommodate conventional PID control in the WNCS setting, the controller can be made robust to the network, or “network aware”, by selecting a tuning such that the control loop is stable for a specified delay jitter, for example by the jitter margin theorem. The jitter margin of the PID controller (24) with the IMC-PID tuning (45), without the pre-filter, is plotted in Figure 16, for a first-order process with K = 1, T = 10, and τ = 0.1 (discrete-time controller parameters: h = 0.1, N_d = 5). The jitter margin for the IMC-PID controller is solved
numerically, as a closed-form minimum of (21) is infeasible. The approximations (65) and (72), which coincide, are also given. The control is stable for less than N_max consecutive drops, where N_max = δ_max/h (23). The discrete-time controller has a larger jitter margin than the equivalent continuous-time controller, whose jitter margin practically saturates at about 2 seconds. By increasing the sampling interval, the jitter margin is increased. This is a consequence of the limited possible delay jitter values in discrete time. The jitter margin of the IMC-PID controller without the pre-filter is less than the approximations indicate. With the pre-filter, the actual jitter margin coincides with the approximations.
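The resulting discrete-time design rule is easily mechanized; the following sketch picks h from (74) and verifies the obtained margin with (71)-(72) (the numeric values are illustrative):

# Discrete-time jitter margin in consecutive packet drops, and the sampling
# interval selection rule of this section.
import math

def discrete_jitter_margin(lam, h):
    gamma = math.exp(-h / lam)                 # eq. (44)
    return 1.0 / (1.0 - gamma) - 0.5           # eq. (71), n = 1; ~ lam/h by (72)

def sampling_for_margin(lam, N_h):
    return lam / N_h                           # eq. (74)

lam = 10.0
h = sampling_for_margin(lam, N_h=8)            # h = 1.25
print(discrete_jitter_margin(lam, h))          # ~ 8.0, i.e. N_max ~ N_h, cf. (75)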
Figure 15. Jitter margin measured in consecutive packet drops with IMC controller as a function of λ, with (96) and without (74) sampling interval quantization.
Figure 16. Jitter margin for a first-order process with K = 1, T = 10, and τ = 0.1. Continuous- and discrete-time PID controller with IMC tuning. Jitter margin for the closed-loop system (21) and the approximations given in Section 3.4.1, (65) and (72). The approximations for the closed-loop system coincide.
3.5. Effect of Network Quality of Service on Control Performance

The network quality of service for a control loop must be guaranteed to maintain stable networked control. There exist theoretical data rate or packet drop rate bounds for stability [120], but they do not indicate the respective quality of control or control performance. The problem is how to measure control performance in relation to network QoS [165]. A common approach is to use one of the integral of error criteria (29)-(32) [71], [92]. An increase in these measures with decreasing network QoS indicates degradation in control performance due to network related problems.

An approach to relate network QoS and control performance is through packet drop. The packet drop sequence, in other words the number of consecutive packet drops, is important for the control system. Packet drops or outages affect the real-time operation, as no feedback is received.
A control loop can usually tolerate occasional packet drops, but control is impossible if several consecutive packets are dropped, even though the average packet drop may be lower in this case. This is in contrast to computer networks, which usually transfer large files over a network, where only the average throughput and packet drop are important.

A controller in a WNCS must be designed to handle single packet drops, since these cannot be avoided, because of fading and interference in the wireless network. Larger gaps, when the network is congested, for example during link breaks, routing, moving obstacles, or extra traffic in the network, are detrimental to the control system and may lead to instability, e.g. when the jitter margin is exceeded.

The network used for control should thus be designed to minimize consecutive packet drops, eliminate outages, and recover quickly from them. The selection of the MAC and routing protocols is important in this case. The MAC protocol should guarantee fair and regular access to the medium. The routing protocol should quickly switch to an alternate route during link breaks or changed traffic conditions if the QoS of the network does not satisfy the requirements of the control system. For the routing protocol, Sections 4.7.1 and 4.7.2 give some insight into how the network affects the control loop in low and high mobility scenarios.
3.5.1. Network Cost for Control

In the literature there exists no network QoS measure specifically for NCSs. One control-related network QoS measure proposed here is the network cost for control (NCC) measure, J_NCC. The objective of the NCC is to indicate the network performance experienced by the controller. This is done through the packet drop statistic, or the length of consecutive packet drops, as the outage length directly affects the control performance, as outlined next.

Consider a PID controller G_c controlling a process G. The closed-loop response is

$$y = \frac{G G_c}{1 + G G_c} y_r . \quad (76)$$
When measurement packets are dropped between the sensor and controller, and ZOH at the controller input is assumed, the control is open-loop and the open-loop response $y_{ol}$ is

$$y_{ol} = G G_c \left( y_r - y_{hold} \right) , \quad (77)$$
where y_hold is the most recently received value before the outage. The difference between normal and outage operation is
$$\tilde{y} = y - y_{ol} = G G_c y_{hold} - \frac{(G G_c)^2}{1 + G G_c} y_r = G G_c \left( y_{hold} - \frac{G G_c}{1 + G G_c} y_r \right) , \quad (78)$$
which in the case of a long outage, when the transient dynamics have ended, can be approximated by the integral windup effect, as $G G_c$ contains the integrator of the PID controller. In this case

$$\tilde{y} \approx \frac{1}{s} \left( y_{hold} - y_r \right) = \frac{1}{s} e_{hold} = t\, e_{hold} , \quad (79)$$
where $e_{hold}$ is the difference between the desired and the controller-observed output. During an outage of length $T_{out}$ with a (constant) non-zero reference error $e_{hold}$, the total squared output error, cf. the ISE criterion, is given by
$$\tilde{y}^2(T_{out}) = \int_0^{T_{out}} \left( t\, e_{hold} \right)^2 dt = e_{hold}^2 \int_0^{T_{out}} t^2\, dt = \frac{e_{hold}^2}{3} T_{out}^3 \sim T_{out}^3 . \quad (80)$$
The outage-induced error (80) leads to the conclusion that the network cost for control should be proportional to the third power of the outage length. In a WNCS with discrete-time packets, the outage length translates to the number of consecutive packet drops. The NCC is only valid for open-loop stable systems; if the system turns unstable, the error grows exponentially instead.

The NCC is related to stability through the jitter margin (Section 2.5), where stability is guaranteed until a certain outage length determined by the jitter margin is exceeded. If the control loops have different delay jitter stability margins, using the δ_max-normalized outage length might be more appropriate to evaluate the outage cost:

$$\tilde{y}_\delta^2(T_{out}) = \int_0^{T_{out}} \left( \frac{t}{\delta_{max}} e_{hold} \right)^2 dt = \frac{e_{hold}^2}{\delta_{max}^2} \int_0^{T_{out}} t^2\, dt = \frac{e_{hold}^2}{3} \frac{T_{out}^3}{\delta_{max}^2} \sim \frac{T_{out}^3}{\delta_{max}^2} . \quad (81)$$
1 Dhist ( k )k 3 , ∑ N k
(82)
(
)
3 , and N is where Dhist is a histogram as a function of the drop length k k 3 ↔ Tout
the count of the total number of packets. The NCC measure (82) is applicable if packet drop affects all control loops simi‐ larly, and they all the same sampling interval. The δmax/h‐normalized outage length can instead be used,
61
$$J_{NCC,\delta} = \frac{1}{N} \sum_k D_{hist}(k) \frac{h k^3}{\delta_{max}^2} = \frac{h}{\delta_{max}^2} J_{NCC} , \quad (83)$$

if the control loops have different jitter margin limits or sampling intervals h. To measure the average NCC, the count of consecutive packet drops is collected in a histogram $D_{hist}$ as a function of the drop length. The histogram is accumulated over N sent packets. The sum $\sum_k D_{hist}(k)\, k$ is the total number of packet drops, where k is the histogram bin. The network cost for control $J_{NCC}$ is then a $k^3$-weighted sum of the number of outage lengths, averaged over all the N sent packets.

As an example, suppose that n of N packets are dropped in equally long bursts of $n_B$ packets. Assume also that n is divisible by $n_B$, and that in N there is room for $n/n_B$ separate packet bursts. Then the NCC becomes
$$J_{NCC}(n, n_B, N) = \frac{1}{N} \frac{n}{n_B} n_B^3 = \frac{n}{N} n_B^2 . \quad (84)$$
Thus, fixing the number of dropped packets, the NCC is minimized with single-packet bursts, $J_{NCC}(n, 1, N) = n/N$, and maximized when all the packets are dropped in one long burst, $J_{NCC}(n, n, N) = n^3/N$. A network protocol should try to minimize $J_{NCC}$, that is, favor single packet drops over consecutive packet drops, to deliver a good QoS for the control loop.
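Computing (82) from a recorded drop trace amounts to collecting the burst-length histogram and weighting it by k³; a sketch, with an example that reproduces (84):

# Network cost for control (82) from a boolean drop trace.
from collections import Counter

def ncc(drops):
    """drops: list of booleans, True = packet dropped. Returns J_NCC."""
    hist = Counter()
    run = 0
    for d in drops:
        if d:
            run += 1
        elif run:
            hist[run] += 1                  # a burst of `run` drops ended
            run = 0
    if run:
        hist[run] += 1
    N = len(drops)
    return sum(count * k**3 for k, count in hist.items()) / N

# Reproduces (84): 10 of 100 packets dropped in bursts of nB = 2
trace = ([True] * 2 + [False] * 18) * 5      # N = 100, n = 10, nB = 2
print(ncc(trace))                            # -> (10/100) * 2**2 = 0.4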
3.5.2. Simulations for Network and Control Performance Relationship

Simulations are done of a first-order system (1) with K = 1, T = 10, and τ = 2 and an IMC-PID controller (see Section 2.7.2), discretized with h = 0.5 seconds and tuned with different values of λ. Several simulations are made with a Gilbert-Elliott model with average packet drop probabilities ranging from 0 to 0.5. The parameters of the model are $d_G = 0$, $d_B \in [0, 0.98]$, $p_{GG} = p_{BB} \in [0, 0.99]$ (if $p_{BB} = 0$, then $p_{GG} = 1$). To average out the particular packet drop realizations, the average of 1000 individual step responses is calculated.

The cases of three values for the controller tuning parameter λ are shown in Figure 17, with and without a measurement packet outage at the beginning of the step. The control cost measured with the ISE criterion (31) is plotted as a function of the packet drop probability and the network cost for control (82) in Figure 18. The cases show the following situations: with λ = 1 the control is tight and the best performance is obtained, which deteriorates with increasing packet drop.
Figure 17. Step response comparisons of differently tuned IMC-PID controllers with (solid) and without (dotted) measurement packet outage between t = 2-3 s.
Figure 18. Control cost as function of packet drop probability and network cost for control (82).
With λ = 2 a slightly lower control performance is obtained, but graceful degradation is evident, as the control cost barely increases with an increased NCC. In the last case, λ = 4, a too conservative tuning is selected; here the control performance might even improve with increased packet drops, due to more aggressive control induced by integral windup.

While the network drop probability does not give a clear indication of the control performance, there is an approximately linear correspondence between the NCC and the ISE cost. The NCC is applied in a more realistic case in the crane control example of Section 4.7.4, which shows a similar behavior between the NCC and the control performance.

Furthermore, the network should deliver equal QoS to all the control loops. Although the overall packet drop rate may be low, a control system cannot function properly if all the packet drops are concentrated in one loop. No control loop should experience more packet losses than the other loops. The assumption of an equal packet drop requirement is natural if the control loops are tuned with the same assumption of network performance or packet drops, for example using the same jitter margin.

Control loop fairness can be measured by comparing the NCC between different control loops. Provided that every control loop has equal packet drop requirements, every control loop should have an equal NCC. The standard deviation of J_NCC calculated over all the M control loops is then a measure of the packet drop fairness among the control loops:
$$\sigma_{NCC} = \sqrt{ \frac{1}{M} \sum_{i=1}^{M} \left( J_{NCC,i} - \bar{J}_{NCC} \right)^2 } , \quad (85)$$
where $\bar{J}_{NCC}$ is the average network cost for control and $J_{NCC,i}$ is the cost of the i-th control loop. The packet drop fairness is employed in Section 4.7.3, where the performances of the control loops are compared.
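The fairness measure (85) then follows directly, reusing the ncc() sketch above; the function name is illustrative.

# Fairness (85) across M loops: standard deviation of the per-loop NCC values.
import math

def ncc_fairness(per_loop_drops):
    costs = [ncc(d) for d in per_loop_drops]     # J_NCC,i for each loop
    mean = sum(costs) / len(costs)               # average J_NCC over M loops
    return math.sqrt(sum((c - mean) ** 2 for c in costs) / len(costs))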
3.6. Summary

In this chapter some practical results and control design considerations were presented. The radio environments of two sites were measured and Gilbert-Elliott models were fitted to the observed packet drops. These models are integrated into the PiccSIM simulator and will be used in the simulations later on.

The step response and stability properties of the IMC design in NCSs were established. It turns out that the tuning parameter of the IMC design directly determines the jitter margin of the resulting controller. In the discrete-time case the tuning gives the stability margin in terms of the number of consecutively dropped packets. These results are used in the adaptive control systems developed in Chapter 5.
The relationship between network QoS, control performance, and stability was also established. With decreasing network QoS, the control performance decreases, as indicated by the proposed network cost for control measure, until instability of the control system is reached. The NCC was further motivated and visualized by simulations.
4. PICCSIM – TOOLCHAIN FOR NETWORK AND CONTROL CO-DESIGN AND SIMULATION

This chapter presents and discusses the simulation platform PiccSIM, a toolchain for the design and simulation of WNCSs. PiccSIM stands for Platform for integrated communications and control design, simulation, implementation and modeling. The aim of PiccSIM is to deliver, as the name suggests, a complete toolset for developing a wireless control system. The tools in PiccSIM range from the beginning of the design of the system, through simulation and system testing, to the implementation of a wireless control system. The main purpose of PiccSIM is as a co-simulation tool for network and control system simulation in a networked control system setting. It is intended for research on NCSs and WNCSs.
Co‐simulation of network and control system. Graphical user interface for running simulations or batches of simula‐ tions. Several integrated tools for network and control system design and modeling, and a controller tuning tool. Control of a true process in real‐time or simulated process over a user‐ specified simulated (if available in ns‐2) or real network. Automatic code generation from Simulink model block diagram for im‐ plementation on actual wireless nodes. Remote user interface for doing student laboratory experiments over the Internet or for sharing the PiccSIM platform with other researchers.
The PiccSIM simulator is an integration of Matlab/Simulink where the dynamic system is simulated, including the control system, and ns‐2 [118] where the network simulation is done. The PiccSIM Toolchain, is a graphical user interface for network and control design, realized in Matlab. It is a front‐end for the PiccSIM simulator and delivers the user full access to all the PiccSIM modeling, simulation and implementation tools.
67
There are several reasons to build a co‐simulation platform consisting of Matlab and ns‐2. Matlab and Simulink are widely employed research tools used in dynamic system simulation, providing efficient tools for control design. Control engineers are accustomed to working in this environment. Ns‐2 [118], on the other hand, is the de facto standard for network simulation in the communica‐ tion research community. Ns‐2 simulates the network on a per packet basis, with models for MAC, routing and transport protocol layers. The wireless communication part of ns‐2 includes radio models with propagation time, sig‐ nal propagation and fading models, error thresholds for received signal strength. The decision to use pre‐existing simulators is supported by the mini‐ mum amount of maintenance needed to improve the simulation environment, and the advantage of using well known and powerful tools. Models for new wireless technologies, such as routing protocols, are frequently developed for ns‐2 [18]. The PiccSIM simulator is presented in more detail in Section 4.3 [P2]. Other existing network and control co‐simulation tools are reviewed in Section 4.2 [P6]. The PiccSIM Toolchain is described in Section 4.4. The Toolchain has sev‐ eral tools for setting network properties and controller tuning suitable for net‐ worked control systems. With the GUI, simulation and management of both simulators is made easy. The advantage of integrating all in one tool is that it is easy to study all aspects of communication and control, including the interac‐ tion between them. PiccSIM enables automatic code generation from the simu‐ lation model to actual wireless network nodes, as presented in Section 4.6. The simulated system can thus be tested with real hardware with no extra pro‐ gramming effort. The PiccSIM simulator has, in addition to the PiccSIM Toolchain, two remote graphical user interfaces, presented in Section 4.5. The remote interfaces are applets based on the MoCoNet system, for accessing the simulation functions with a web‐browser, without the need to install PiccSIM [P1], [P2]. One of the interfaces is for students, which enables convenient fields for inputting control‐ ler tuning and running experiments; and the Researcher’s Interface, which offers all researchers the opportunity to use the PiccSIM simulator.
4.1. Development of the Co‐simulation Platform The PiccSIM platform has been developed by the author over several years, starting from the summer of 2004. In the beginning it was in the form of the MoCoNet (Monitoring and Controlling Educational Laboratory Processes over Internet) platform. The platform was at that time developed for educational purposes, specifically for enabling remote laboratory experiments. Much of the architecture has survived to the PiccSIM platform. The remote user interface, communication with Matlab and a simple network simulator were already implemented then. [P1]
68
Much, though, has changed over the years. Some features, such as network simulation, have been improved, and some features have remained the same, for instance the MoCoNet user interface. Completely new features have as well been developed, such as the PiccSIM Toolchain. Already in the MoCoNet platform there was a possibility to simulate a network by routing packets through a simple network simulator. The simulator delayed the packets according to a specific time‐delay distribution and implemented a random packet drop with a certain probability, imitating statistically the delay and packet drops of a network [P1]. The simulations were run approximately in real‐time. In the PiccSIM platform the network is simulated with the ns‐2 simu‐ lator, which is more realistic, since it actually simulates packets traveling in a user specified network [P2]. Now simulations are run as fast as possible, with the aid of time‐synchronization between the simulators [P3]. When reading the older publications related to this thesis (mainly [P1], [P2], and [P4]), one has to keep in mind that some information presented there may be outdated, because of the constantly evolving development of the simulation platform. For example, the connection between Matlab and the remote user interface was previously implemented with the Matlab Web server, but is now replaced by the Java Native Interface. On the simulation side, the time‐ synchronization mechanism between the two simulators was developed quite late. This improvement changed slightly some of the simulation results. New, more accurate simulations have been done for this thesis. The most recent pub‐ lication [P3] depicts the current situation most accurately. Next, other NCS simulators are surveyed and in the following sections an up‐ dated documentation of the PiccSIM platform is presented.
4.2. Review of Networked Control System Simulators The PiccSIM Toolchain is unique, because it enables the design simulation and implementation of wireless control systems in one framework. There are other similar WNCS simulators, but they do not deliver any design support, or auto‐ matic code generation for actual wireless nodes. PiccSIM is also rich on simula‐ tion features, as it is comprised of two simulators. WNCS or sensor network simulators can be divided into several categories: network simulators with control or application extensions; control system simu‐ lators with network simulation extensions; sensor node simulators, where the actual code of the sensor node is executed; and hybrid simulators, where a network and a control simulator is combined.
69
In addition to the simulators reviewed here, there are plenty of network only simulators, which cannot be applied for WNCS simulation as such. Some of the simulators are the sensor network simulation tools and testbeds: ns‐2 [118], TOSSIM [89], OMNeT++, J‐sim, WISENES [83], and Cooja [38], which do not consider real‐time plant dynamics, control or actuation. For other network simulators, mostly aimed at sensor networks, see e.g. [35]. PiccSIM can also be used for sensor network application simulation, where a typical WSN simula‐ tion case would be testing a distributed algorithm. The basic NCS simulators are commonly implemented by extending an existing simulator with a network or dynamic simulation extension, and as such the extension are usually not as versatile as the main simulator. The most common approaches are extending ns‐2 or Simulink. The advantage of these extensions is that the network and control simulation is done within the same tool, but the disadvantage is that the simulator may not be equally suitable for both network and control simulation. Additionally, the network or control extension must be developed from scratch, which usually leads to simplistic and inaccurate mod‐ els. Most of the network simulators have no control or dynamic simulation mode to enable reasonable WNCS simulation. Therefore a variety of extensions to these simulators have been implemented. Ns‐2 [118] is the de‐facto standard network simulator in the communication research community. It is flexible and can be extended by new classes written in C++. For ns‐2 there exists some dynamic system simulation addons, for example the Agent/Plant extensions [17], [152], or the more general NSCSPlant and NSCSController classes [5], which define agents with dynamic properties in the form of ODEs to simulate the process and controller. The process output is sampled with an, optionally adaptive, schedule and packets with the measure‐ ment are sent to the controller. The downside is that complex control system logics are difficult to realize with differential equations. The Scatterweb applica‐ tion programming interface has been added to ns‐2 to enable running sensor node executables in ns‐2 [167]. Simulink has been extended to simulate WNCSs by creating Simulink blocks that simulate the network. There exist many Simulink network simulation blocks, e.g. [53]. One of the first wireless network blocks developed, is an S‐ function that implements the IEEE 802.11b DCF (Distributed Coordination Function) [31]. It has a frame level correlated channel model, which models indoor, non line‐of‐sight environments. Another Simulink based network simu‐ lation blockset developed at the University of Michigan is capable of simulating Ethernet, ControlNet, and DeviceNet networks [93]. The networks are modeled by theoretical communication times calculated in [91]. Perhaps the most well‐known Simulink network blockset is TrueTime, which is actively developed at Lund University, Sweden [22]. It supports many network types (Wired: Ethernet, CAN, TDMA, FDMA, Round Robin, and switched
Ethernet; wireless: 802.11b WLAN and IEEE 802.15.4), and it is widely used to simulate wireless NCSs [7]. TrueTime simulates only the physical and MAC layers. Besides the dynamic system simulation offered by Simulink, the network node simulation includes simulation of real-time kernels: the user can write Matlab m-file functions that are scheduled and executed on a simulated CPU. Ultrasound networks (from version 2.0) and node battery simulation are also included.

The Ad hoc On-demand Distance Vector (AODV) routing protocol [126] has been implemented in TrueTime by appropriate functions running on the simulated kernels [24]. The simulation of mobile robots, including the physical robot model and an inter-robot communication protocol, has been implemented for studying robots in simulations and comparing them to real robots [182]. In recent work, the simulation of a WirelessHART network has been made possible by an extension of TrueTime [13]. The simulation uses frequency hopping and a TDMA MAC protocol, but time-synchronization is not simulated; it is assumed to be perfect. The developed WirelessHART network block has useful input ports with which the radio interference and packet drops can be specified beforehand. With these, one can study the impact of packet drops at instants critical for the control loop. The device table, routing and communication schedule are specified by the user, so no network manager functionality is implemented.

The NMLab co-simulator combines ns-2 and Matlab [63]. The control system tasks are defined with Matlab scripts and callbacks. The approach is scalable, as it is easy to duplicate control loops using the Matlab scripting language, but complex dynamic systems might be difficult to implement compared to using Simulink. Ns-2 and Matlab are synchronized such that Matlab commands ns-2 to execute to a specific time, whereupon a new event is scheduled.

The wireless node operating system simulators TOSSIM, COOJA, and RTNS (Real-Time Network Simulator) are worth mentioning. They do not specifically support control system simulation, but complete wireless applications can be simulated with these tools. TOSSIM and COOJA simulate the code execution on the wireless nodes and have simple radio models to allow simulation of many communicating nodes. Both sensor node simulators use simplistic range-based network propagation models. RTNS [122] is a simulator for real-time wireless node operating systems. It simulates the scheduling of tasks using RTSim (Real-Time operating system SIMulator) and the network using ns-2.

TOSSIM [89] is a simulator for TinyOS [154]. With TOSSIM, whole networks consisting of nodes running TinyOS can be modeled. The actual application code is executed on the node simulator and the communication between the nodes is simulated at the bit level. Sensing and actuation are emulated with external code for the analog read and write operations. Simulation of wireless
control systems can thus be done by implementing suitable read and write functions.

COOJA is a cross-layer simulator for the Contiki node operating system [38], implemented in Java. COOJA combines the simulation of code execution, radio transceiver, network, and operating system into one tool [186]. Simulation of physical processes can be achieved by developing plug-ins representing process models that can be attached to the simulated input/output interface of the nodes. The nodes are run either as compiled code on the host CPU or on a TI MSP430 emulator; simpler Java node models can also be used and combined in the same simulation, which makes the simulator both accurate and scalable according to the user's needs.

Another sensor network simulator is WISENES, which can simulate sensor network nodes with communication and application scheduling, and takes into account energy, memory, and processing power consumption. The configuration is done with the high-level Specification and Description Language, and the results are presented in GUIs and trace files [83].

Other extended simulators include Ptolemy II and Arena/ns. Ptolemy is a discrete event simulator with emphasis on the simulation of heterogeneous, hierarchical and asynchronous systems. It has, for instance, been extended to simulate distributed detection with sensor networks, but the extension is no longer developed [10]. Arena is a tool aimed at simulations of mobile multi-robot scenarios. It is extended by integrating ns-2 for inter-robot communications [175]. Arena provides mechanisms for implementing sensor reading and motor control commands in the simulator. Similarly to PiccSIM, the positions of the robots are synchronized between Arena and ns-2. The simulator is only suitable for mobile robot scenarios, and the simulation is done in real-time, neglecting synchronization issues (see Section 4.3.3).

More advanced WNCS simulators are hybrid: they combine two simulators by integrating a network and a dynamic system simulator. The advantage is that relevant, existing, powerful, and well-known tools are used for both network and control simulation. A caveat is that it may be difficult to properly integrate two simulation tools and produce correct results.

The most relevant co-simulation tool for WNCS simulation besides PiccSIM appears to be Modelica/ns-2 [5]. It is a very similar platform developed at Case Western Reserve University (USA). As in PiccSIM, the network simulation is done in ns-2, but the plant dynamics and the control simulation are done in Modelica. Modelica is a general-purpose dynamic system modeling language [107]. With a graphical modeling and simulation environment, such as Dymola [39] among others, it corresponds to Simulink. In Modelica/ns-2, both simulators exchange information with each other to synchronize the simulation of the system in both the control and network domains. The simulation is controlled by
ns-2, and Modelica is instructed to run until a certain time, upon which data synchronization (copying values to be sent to ns-2 and received values to Modelica) is performed. The packet rates, sources and destinations are specified in the TCL (Tool Command Language) script for ns-2 before the simulation, so no event-based communication can take place, for example in response to possibly unforeseen events in the dynamic model, such as alarms or threshold crossings. In PiccSIM, the traffic is generated in Simulink, and event-based transmission of packets is possible. This enables simulation of event- or self-triggered control.

Optimized Network Engineering Tool (OPNET) is a commercial package for general-purpose, detailed simulation and analysis of many different networks [26], [119]. It is widely used and generally regarded as one of the best network simulator packages. It is more advanced than ns-2, as it, among other things, supports simulation of the physical link and the antennas, and has better configuration and visualization capabilities. OPNET can be customized using the Proto-C language, but dynamic system simulation is not easily done. Regarding WNCS simulation, the effect of the sampling interval, data rate, node movement and routing algorithm on several different plant models has been investigated with OPNET [61]. The main result was that with a higher network data rate, the radio range decreases and more hops are needed to reach the destination, so the plants with the tightest real-time demands could not be controlled. OPNET has also been integrated with Simulink to simulate a two-pendulum WNCS with a remote controller, including both a simulated and a real wireless network [62].

The simulator by Soglo [146] combines ns-2 with Matlab using a C/C++ interface. The plant and control algorithms are implemented in C code or with Matlab m-files; Matlab executes them when called by ns-2 through an external interface program. A special UDP packet format is implemented to carry control data in the simulation. The presented results focus only on the network performance: performance measures are given for wired links as a function of bottleneck bandwidth, number of processes, and sampling intervals. No control-related results are presented.

In choosing the network simulator, it is important to evaluate the simulation objective, the accuracy of the simulation results and the simulation efficiency. With simple simulation models using delay distributions or packet drop probabilities, the simulation results are not accurate, but for a number of cases the result may be adequate. An example is the case of investigating control robustness against packet drops, where the drop rate is the factor under investigation.

Several studies have been done to compare and evaluate the simulation results and accuracy of ns-2. In the wired case, ns-2 and OPNET give similar results compared to a real testbed network [100]. In the wireless case, however, the
simulation results diverge significantly between different simulators [133]. This is mainly due to different assumptions and simplifications in the environment, signal propagation and radio models used to model the real wireless network [79]. The current PiccSIM simulation results are as accurate as any other simulation based on ns-2. When evaluating the control results in Chapter 5, the qualitative network behavior is more important than the quantitative: patterns of packet drops matter more than the average packet drop rate.

To compare the existing co-simulators, Table 1 summarizes the main properties of the relevant simulators. For evaluating a complete wireless control system application, accurate network and control system simulation models must be built. This rules out TrueTime, as it has simple network models. OPNET, Agent/Plant, Arena/ns and the node simulators are not suitable either, as they do not support control systems well. The only viable alternatives for WNCS simulation are thus the co-simulators PiccSIM and Modelica/ns-2. Finally, PiccSIM has the advantage over all the other simulators that it offers control and network design tools.

Table 1. Comparison of simulators for wireless networked control systems.
| Simulator           | Type           | Based on       | Free | Control supported | Advanced network models | Event-/Time-driven |
| PiccSIM             | co-simulator   | Simulink, ns-2 | No   | Yes               | Yes                     | Yes/Yes            |
| Modelica/ns-2       | co-simulator   | Modelica, ns-2 | Yes  | Yes               | Yes                     | No/Yes             |
| TrueTime            | control        | Simulink       | No   | Yes               | No                      | Yes/Yes            |
| OPNET               | network        | –              | No   | No                | Yes                     | Yes/Yes            |
| Cooja, TOSSIM, RTNS | node simulator | –              | Yes  | No                | No                      | Yes/Yes            |
| Agent/Plant         | network        | ns-2           | Yes  | Limited           | Yes                     | Yes/Yes            |
| Arena/ns            | co-simulator   | Arena, ns-2    | Yes  | Yes               | Yes                     | Yes/Yes            |
4.3. PiccSIM Architecture

The general architecture of the PiccSIM platform is depicted in Figure 19. The PiccSIM simulator basically consists of two computers on a local area network (LAN), with access from the Internet: the Simulink or xPC Target computer for system simulation, including plant dynamics, signal processing and control algorithms, and the ns-2 computer for network simulation. An example of a wireless control system simulation with network and control co-simulation is depicted in Figure 20, where the simulation domains are indicated using green for the network and blue for the control domain in Figure 19. The technical details are explained in the next subsections.

The Simulink model can either be run normally in Simulink (free-run) or in real-time with the Matlab xPC Target real-time operating system. The Simulink mode is for pure simulations, where the time is synchronized between Simulink and ns-2 (see Section 4.3.3), and the xPC Target mode is for hardware-in-the-loop runs, where a real process is run over a simulated network in real-time. This mode is used for educational purposes in student lab exercises. The xPC Target computer runs a compiled version of the Simulink model, in which the control algorithms and nodes are modeled, in real-time using a Matlab proprietary real-time operating system. The xPC Target computer has an I/O board to connect it to the real process for measurements and actuation.

The network is simulated in PiccSIM by the ns-2 computer. Packets sent over the simulated network are routed through the ns-2 computer, which simulates the network in ns-2 according to a TCL script specification generated automatically by the network configuration tool (see Section 4.4.2). In free-run simulations, simulation time-synchronization is performed between the computers. The integration of the simulators is explained in more detail in the next subsections.

The server computer is responsible for the remote user connections and for running the PiccSIM Toolchain and Simulink models during normal simulation (free-run). When the xPC Target computer is used, Simulink models built by the user are automatically compiled to executable code using the automated code generation capabilities of Matlab (rapid control prototyping) and uploaded to the xPC Target, where they are run. For the remote user interface (Section 4.5), the server stores the simulation results in a database for later retrieval. The PiccSIM server computer is attached by a LAN to a gateway, such that users on the Internet can connect to the system and operate it.

In the following subsections, some features and implementation details of the simulator are presented.
Figure 19. PiccSIM architecture with control and network simulators, connection to hardware, and user interfaces. Modified from [45].
Figure 20. Wireless control loop split into dynamic and network simulation domains.
4.3.1. Simulink and ns-2 Integration

The PiccSIM simulator is created by integrating two different simulators: Simulink for control system simulation and ns-2 for network simulation. Communication over the simulated network is done with UDP packets, since in control system applications a lightweight container for a small amount of data is more suitable than TCP. Packets sent over the network in the simulation model are routed through the ns-2 computer, which simulates the network according to a TCL script specification.

The simulated communication over a network commences by Simulink sending a UDP packet to the ns-2 computer. The ns-2 computer captures the UDP packets from the LAN (with so-called taps) and injects them into the simulated network model. If the packet reaches its destination in the simulated network, it is sent back to Simulink. The corresponding Simulink UDP receive block captures the packet, converts it to a Simulink signal and outputs it immediately to the rest of the simulation model. Figure 21 shows the connectivity mapping between the Simulink and ns-2 nodes. UDP port numbers are used to associate packets with the corresponding nodes in ns-2. The communication over the simulated network is handled in Simulink by a ready-made library of blocks, as explained in Section 4.4.1.
Figure 21. Simulink and ns-2 integration. Communication between the simulators: data packets, information updates, and time-synchronization.
The integration of the control and network simulators is such that transmitting and receiving packets from/to Simulink is equivalent to communicating over a real network. In practice, at the time instant when a packet is sent, it is instantly (before the next simulation step) transferred to the network simulator. When a packet reaches the destination node in the network simulator, it is received in Simulink at the closest following time-step. This ensures that the connection between the control and network simulators is as transparent as possible. Thus, the simulation is as accurate as the quantization imposed by the Simulink time-steps implies; reducing the Simulink time-step decreases the timing error due to the integration of the two simulators. Compared to an actual implementation with real hardware, inaccuracies occur only in the precise timing of when the packet is sent, where the preparation of the packet, traversal down the network stack, and other scheduled operations interfere with the timing. The transfer between Simulink and ns-2 does not take any time, since the simulations are time-synchronized and the transfer occurs before the next time-step is taken.

The capabilities of the current ns-2 version (v2.34) have been extended to suit the requirements of the PiccSIM simulator. A new scheduler for ns-2 was developed for synchronizing the two simulators. In normal simulation mode (free-run), the simulation time is synchronized between the simulators, as described in Section 4.3.3. In real-time operation with the xPC Target, the network simulator uses the emulation mode of ns-2, known as NSE (Network Simulator Emulator), to run in real-time. Other newly developed features are the dynamic data update mechanism (Section 4.3.2) and the packet drop models (Section 4.3.4).
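To make the packet path concrete, the following Python sketch mimics the UDP forwarding described above. It is an illustration only, not PiccSIM code: the host address, base port and payload layout are invented here, and the real system maps ports to ns-2 nodes through the Toolchain configuration.

```python
import socket
import struct

NS2_HOST = "192.168.0.2"   # assumed address of the ns-2 computer on the LAN
BASE_PORT = 20000          # assumed convention: port = BASE_PORT + node ID

def send_measurement(node_id: int, timestamp: float, value: float) -> None:
    """Pack a timestamped measurement and send it towards the simulated
    network; ns-2 taps the datagram off the LAN and injects it into the
    simulated node identified by the destination port."""
    payload = struct.pack("!df", timestamp, value)  # 8-byte time + 4-byte value
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (NS2_HOST, BASE_PORT + node_id))
    sock.close()

def receive_measurement(listen_port: int):
    """Blocking receive of one packet delivered back by the simulated network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", listen_port))
    payload, _addr = sock.recvfrom(1024)
    sock.close()
    return struct.unpack("!df", payload)            # (timestamp, value)
```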
4.3.2. Data Exchange Between Simulators

Since PiccSIM is an integration of two simulators, they are by definition separated. To close the gap between the simulators, a data exchange mechanism is implemented. This data exchange passes information from one simulator to the other, which enables the simulation of cross-layer protocols that take advantage of information from other layers.

An example of data exchange arises in mobile scenarios. Ns-2 supports node mobility, but natively only with predetermined or random movement. There exist, however, many applications, such as mobile robots, search-and-rescue, exploration, tracking and control (see Section 4.7.1), or collaborating robots (see Section 4.7.2), where the control system or application determines the node movement at run-time, e.g. [P4] and [P5]. In these cases the controlled node positions must be updated from the dynamic simulation to the network simulator. The updated node positions are then used in the network simulation, where they affect, for instance, the received signal strength; the resulting changes in the network topology may further initiate re-routing.

The data exchange mechanism is used in this thesis to update the node positions in the simulations of Sections 4.7.1 and 4.7.2, where the simulated movement
of a node is updated from Simulink to ns-2 [130]. The node ID and x-y position are transmitted at a user-specified time interval to ns-2 by a ready-made block (see Section 4.4.1), which updates the node position in ns-2, as illustrated in Figure 21. Besides position information, other data updates are also possible, both from Simulink to ns-2 and vice versa.
4.3.3. Simulation Clock Synchronization

To generate correct simulation results, the integrated simulators must be synchronized in time. This is accomplished by the data exchange mechanism and a new scheduler for ns-2 [P3]. Previously both simulations were run in real-time, enabling control of a real plant over a simulated network. This feature is still available with the xPC Target simulation. Running in real-time, however, cast doubt on the correctness of the whole simulation, since accurate synchronization could not be guaranteed. With time-synchronization, slightly different results are obtained compared to running in real-time, indicating that synchronization makes a difference. Other reported work suggests that the real-time simulation of ns-2 is not accurate, due to simulation clock inaccuracies and scheduling problems [101].

The benefits of time-synchronization between the simulators are that the simulations do not need to be run in real-time, so a simulation takes less time, and that the results are more accurate, because synchronization ensures that both Simulink and the network simulator are at the same time instant. It furthermore removes the minute time-delay caused by the LAN connecting the Simulink and ns-2 computers.

The time-synchronization scheme is built upon the data exchange framework presented in the previous subsection and works as depicted in Figure 22: Simulink sends ns-2 a packet containing the current simulation time. Ns-2 then simulates the network up to that time, replies to Simulink, and waits for a new synchronization packet. Upon receiving the reply from ns-2, Simulink advances one time-step in the simulation and sends a new synchronization packet to ns-2. Communication packets and data exchange are handled before the synchronization and clock advancement. The accuracy of the integration is that of the Simulink time-step, such that packets returned by ns-2 are received by Simulink on the next time-step. The accuracy can be improved by decreasing the maximum time-step of the Simulink solver.

External time-synchronization of ns-2 from Simulink is enabled by modifying the ns-2 scheduler. Enabling the time-synchronization mechanism requires minor changes in the ns-2 configuration script and in the Simulink model. The Toolchain automatically generates the additional TCL code, and the user must add a ready-made block called “Synchronize with ns-2”, included in the PiccSIM library, to the Simulink model.
Figure 22. Simulink and ns-2 simulation time-synchronization messaging. The exchange of packets to and from the simulated network is shown with dashed arrows.
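The messaging of Figure 22 amounts to a lockstep protocol between the two simulators. A minimal Python sketch of the Simulink-side loop is given below, assuming a plain-text payload and a UDP synchronization endpoint (both invented here; the actual implementation uses the modified ns-2 scheduler and its own packet format):

```python
import socket

STEP = 0.01                      # assumed Simulink fixed time-step [s]
SYNC_ADDR = ("127.0.0.1", 5555)  # assumed ns-2 synchronization endpoint

def run_lockstep(t_end: float) -> None:
    """Advance the dynamic simulation one time-step at a time, letting the
    network simulator catch up between steps (lockstep synchronization)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 5556))        # local port where ns-2 sends its replies
    t = 0.0
    while t < t_end:
        # 1) tell the network simulator how far it may simulate
        sock.sendto(f"SYNC {t + STEP:.6f}".encode(), SYNC_ADDR)
        # 2) wait until ns-2 has reached that time; data packets released
        #    by the simulated network arrive before this acknowledgement
        reply, _ = sock.recvfrom(64)
        assert reply.startswith(b"DONE")
        # 3) only now advance the dynamic simulation one step
        t += STEP
    sock.close()
```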
4.3.4. Other Implemented Features

PiccSIM has some other implemented special features worth mentioning that are used in this thesis.

It is often desirable to compare the obtained WNCS simulation results with the case of a perfect network, to evaluate the degradation in control performance due to the network. For this purpose a super-network feature is added to ns-2, whereby the network simulator returns the packet immediately to Simulink without injecting it into the network simulation model. Using the super-network feature, the same PiccSIM simulation model can easily be simulated with or without a network just by flipping an option.

In order to study the effects of different parameter values, a batch run feature is developed in PiccSIM. Through user-defined scripts, any value in the simulation, on either the network or the control side, can be varied, and several simulations performed automatically. The specified results of the simulations are stored in vectors for analysis. This tool allows for an easy survey of the impact of different parameters, be they controller parameters or network properties. The batch run feature is used in the simulations presented in Sections 4.7.4 and 5.4.

The radio environment models in ns-2 are quite simple [118], and thus two additional propagation models are implemented in ns-2. For indoor simulations, an indoor fading model is integrated into ns-2, and for simulation of any environment, a data-based packet drop model is made. The indoor propagation model extension to ns-2 takes into account the shadowing from walls. This extension yields more realistic indoor simulations than the default ns-2 propagation models, since the attenuation of the walls of a real building is taken into account [P3]. The extension reads a file containing signal attenuation values for every node pair in the network. The values are calculated with a multi-wall model, as explained below. The extension also allows other attenuation values to be used, for example real measurements from a factory. A similar model is presented in [29], where the signal propagation is modeled based on a blueprint and the attenuation model is integrated into the OMNeT++ network simulator; signal strength measurements from the site can be integrated into the model.

The implemented multi-wall model takes into account the walls located directly between the transmitter and the receiver and allows for individual wall material properties. The wall attenuation values are selected based on statistical results from measurements. The selection of a proper path loss exponent, which determines how much the signal fades with distance, is crucial, because the value is highly dependent on the type of building or the structure of the indoor environment. In an office environment with walls and furniture, the value is usually between 3 and 6 [P3].

The calculation of wall attenuations requires a description of the indoor scenario. The simulator takes a simplified grayscale picture of the environment, the building blueprint, as input. This picture portrays different wall materials with different colors, and the corresponding attenuation factors are defined for each color in a table. The simulator calculates the attenuation of the transmitted power in every pixel of the given blueprint. The losses due to walls are added to the overall path loss, which can be any of the default ns-2 propagation models. An example is illustrated in Figure 23, where the coverage prediction of the simulator and the error compared to real measurements are shown. About 75 % of the simulated values differ from the real values by less than 7.5 dB.

The wall propagation model is based on a blueprint or measured signal attenuation values. It represents the average signal conditions and does not capture the actual packet drop dynamics. To better model the communication channel, a Gilbert-Elliott model (Section 3.2) can be used. A Gilbert-Elliott model is implemented in ns-2, with a separate model for every link pair. G-E models based on measured data can be specified for every link pair, and ns-2 will simulate the packet drops accordingly. In the building automation simulation in Section 4.7.3, both the wall propagation and the Gilbert-Elliott packet drop models have been used. The measurement procedure and the obtained G-E models are shown in Sections 3.1 and 3.2, respectively. In this thesis only the simulation results with the G-E model are presented, as the simulation results are very similar with either model.
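A Gilbert-Elliott channel is a two-state Markov chain with a good and a bad state, each with its own packet drop probability. A minimal per-link sketch in Python follows; the transition and drop probabilities below are placeholders, whereas in PiccSIM they are identified from measurements as in Section 3.2.

```python
import random

class GilbertElliott:
    """Two-state Markov packet drop model for one link pair."""
    def __init__(self, p_gb, p_bg, drop_good, drop_bad):
        self.p_gb = p_gb                  # P(good -> bad) per packet
        self.p_bg = p_bg                  # P(bad -> good) per packet
        self.drop = {True: drop_good, False: drop_bad}
        self.good = True                  # start in the good state

    def packet_dropped(self) -> bool:
        # advance the channel state first, then draw the drop decision
        if self.good:
            if random.random() < self.p_gb:
                self.good = False
        elif random.random() < self.p_bg:
            self.good = True
        return random.random() < self.drop[self.good]

# Placeholder parameters: a mostly reliable good state, a lossy bad state.
link = GilbertElliott(p_gb=0.05, p_bg=0.3, drop_good=0.01, drop_bad=0.6)
drops = sum(link.packet_dropped() for _ in range(10000))
print(f"simulated drop rate: {drops / 10000:.2%}")
```

Because the state persists between packets, the model produces bursts of consecutive drops rather than independent losses, which is exactly the qualitative drop pattern that matters for control evaluation.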
Figure 23. Coverage prediction (left) and error compared to measurements in the real building (right), in dB.
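The blueprint-based wall loss of Section 4.3.4 can be pictured with the following Python sketch, which walks the straight line between transmitter and receiver over a grid of material codes and sums a per-material loss. The grid resolution, material codes and attenuation values are illustrative; the actual extension precomputes per-node-pair attenuations into a file read by ns-2, on top of a default distance-based path loss model.

```python
def wall_loss_db(blueprint, atten_db, tx, rx):
    """Sum the attenuation of the wall pixels crossed on the straight line
    between transmitter and receiver (a simplification: thick walls spanning
    several pixels contribute once per pixel crossed)."""
    (x0, y0), (x1, y1) = tx, rx
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    loss, prev = 0.0, None
    for i in range(steps + 1):
        x = round(x0 + (x1 - x0) * i / steps)
        y = round(y0 + (y1 - y0) * i / steps)
        material = blueprint[y][x]            # 0 denotes free space
        if material != 0 and (x, y) != prev:
            loss += atten_db.get(material, 0.0)
        prev = (x, y)
    return loss

# Illustrative 1 m grid: 0 free space, 1 plaster (~3 dB), 2 concrete (~12 dB)
blueprint = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 2, 0, 0],
]
print(wall_loss_db(blueprint, {1: 3.0, 2: 12.0}, tx=(0, 0), rx=(4, 2)))
```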
4.4. PiccSIM Toolchain

The PiccSIM Toolchain ties the design, implementation and simulation of WNCSs into one integrated package, where all the functionalities of the PiccSIM simulator are available through a graphical user interface. It combines several tools for designing, simulating and implementing wireless control systems. PiccSIM is unique among the WNCS simulators, because it enables the design and simulation of wireless control systems in one framework. There are other similar NCS simulators (see Section 4.2, [5], [31], [62] and references therein), but they do not, for instance, deliver any control design support. The designed and simulated controllers, or any generic algorithms, can further be implemented on actual wireless nodes with the automatic code generation tool, as explained in Section 4.6. Thus, any distributed application can first be designed and simulated in PiccSIM, and later tested on real hardware, without extra programming work.

The Toolchain runs as a Matlab GUI. It consists of tools for generating the ns-2 configuration file, automatically adjusting controller parameters (through tuning rules or algorithms), identifying process transfer functions, and automatically generating embedded code for wireless nodes. Next the Toolchain architecture is presented, followed by the different network and control system design tools.

With the Toolchain, both the network and the control simulators are managed, starting and stopping them at the same time with a button click. This hides the complex networked control system co-simulation behind one GUI, while leaving the user the full capability to specify the simulation model.
4.4.1. PiccSIM Block Library

The PiccSIM library, shown in Figure 24, is a set of Simulink blocks that add wireless communication capabilities to any Simulink model, for example to construct a networked control loop. The communication over the ns-2 simulated network is handled by the ready-made node blocks, so that the user need not pay attention to the integration of Simulink and ns-2. The library contains Controller and Process blocks to create control loops, and wireless node blocks to create wireless communications between parts of the control loop. The Controller and Process blocks are replaced by the correct implementation by the PiccSIM Toolchain; however, they can be edited to allow a custom implementation of the controller or process models. [P3]
Figure 24. PiccSIM Toolchain library of Simulink blocks: blocks for sending and receiving over a network; controller, process model and generic node blocks; additional blocks for displaying signals in a scope and for simulator time-synchronization; data exchange blocks; blocks implementing radio-triggered logic; and a collection of utility blocks and controller implementations for PiccSIM.
Constructing a wireless control loop with the PiccSIM library blocks is easy. In Figure 25, a simple example of a control loop with wireless measurements is
shown. Before simulation, the network nodes need to be configured with the source and destination IDs of the communicating nodes, and the data types and dimensions of the signals, using the dialog shown in Figure 26. The blocks support both event-based and periodic transmission. The conversion to UDP packets and back to Simulink signals is done in the network node blocks. Since the information transmitted over the network is actually included in the packet, the choice of data types and information sent is reflected in the packet size and the network simulation results, as it would be in a real system. The output of the received signals is held until a new packet arrives; whether a new packet has arrived can be determined by observing the timestamp port. [P2]

The library includes, for the remote user interface, a block for logging Simulink signals into the database for later retrieval and for displaying the signals in a scope in real-time during simulation [P1]. The library also has a block for sending dynamic data to ns-2, as explained in Section 4.3.2, which is used for the mobile node simulations in Sections 4.7.1 and 4.7.2 [P3].

The PiccSIM library contains a generic node block, which is used in automatic code generation to create an implementation of any algorithm defined with Simulink blocks and to execute the same algorithm on real node hardware. This block can implement any computational algorithm, whereas the Controller and Process blocks are specialized for control systems. Naturally, the limited memory and computational resources of the hardware set a constraint on the implementable algorithms. The generic node block supports reading from and writing to analog inputs and outputs and communicating user-defined signals over the radio. The generic node block and the automatic code generation are explained in more detail in Section 4.6. [P3]
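Because the configured signals are actually serialized into the UDP payload, the chosen data types determine the simulated packet size directly. A hypothetical Python packing of one node's payload is sketched below; the field set and types are invented for illustration, mirroring the kind of configuration made in the dialog of Figure 26.

```python
import struct

# Assumed signal configuration: uint8 node ID, float64 timestamp,
# two float32 measurements -> 1 + 8 + 2*4 = 17-byte payload.
PACKET_FMT = "!Bd2f"   # '!' selects network byte order, no padding

def pack_payload(node_id, t, temp, co2):
    """Serialize one sample exactly as it would travel over the network."""
    return struct.pack(PACKET_FMT, node_id, t, temp, co2)

payload = pack_payload(3, 12.5, 21.7, 410.0)
print(len(payload))                         # 17 bytes on the simulated network
node_id, t, temp, co2 = struct.unpack(PACKET_FMT, payload)
```

Switching a measurement from float64 to float32, or dropping a field, would thus change the packet size seen by ns-2, just as it would on real hardware.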
4.4.2. Toolchain User Interfaces

The main graphical user interface of the Toolchain is shown in Figure 27. In the GUI, the Simulink model and the TCL script for the network are selected, and the controls for running simulations are available. The GUI provides access to the PiccSIM block library and to the other design and simulation tools, such as network setup, controller tuning, and code generation. [P3]

In the network settings window, shown in Figure 28, configuration scripts for the ns-2 network simulator are created with a user-friendly GUI, where the user can specify the settings of the ns-2 network simulator, including the node positions, network protocols and simulation parameters, as well as PiccSIM-related simulation settings. The settings include the propagation model, routing and MAC protocols, node movement pattern, node connection pattern, etc. It is also possible to create additional simulated traffic. The generated script can be edited by hand for custom simulation settings. The network specification script
Figure 25. Example of a simple wireless control loop with a controller, a process, and blocks transmitting and receiving the process output data wirelessly.
Figure 26. Node block configuration dialog for specifying communication properties, such as the packet payload data types and the node to communicate with.
is automatically loaded onto the ns-2 computer before each simulation, so that the current network configuration is used.

With the controller tuning tool, shown in Figure 29, the controllers are designed for the respective processes. Processes are modeled as transfer functions with delays, or, if more complex processes are needed, any custom process can be created using Simulink blocks. The supported controller types are mainly of PI, PD or PID type, and they can be tuned automatically using several tuning methods presented in [47], which are suitable for networked control systems. One of the implemented tuning rules is the jitter-margin-based PID controller tuning rule described in Section 2.6.1. If other types of controllers are needed, the controller block can be customized manually.
Figure 27. The Toolchain main graphical user interface window gives access to the control and network models, and to the management of the simulation.
Figure 28. Network settings window showing node locations.
Figure 29. Controller tuning window with automatic tuning methods, based on a process model, that are suitable for wireless control systems.
4.5. Remote User Interfaces

A remote, virtual, or web laboratory is a system which enables the user to run a laboratory experiment and view the results remotely, for instance with a web browser. The main use of a remote laboratory is educational: to enable students to run laboratory experiments from home. Remote laboratories are used in education to enable flexible hands-on experience and resource sharing. An example of an often used laboratory process is the inverted pendulum [138]. For a review of the history, role, objectives, benefits and impact of educational remote laboratories, see [49].

A large number of remote laboratories have been developed at universities around the world. The field is still developing; there are not yet any standards for remote laboratories, and every lab is implemented in a different way [58]. No survey of remote laboratories is attempted here, since it would be impossible to cover them adequately. A survey of remote laboratories and the technologies used can be found in [58], where the future challenges and development objectives are listed.

The remote user interface developed in this thesis is the MoCoNet platform. A similar remote laboratory setup is reported in [121]. The MoCoNet remote user interface is a Java applet that runs in a Java-enabled browser. The remote interface allows students to select controllers, adjust tuning parameters, and simulate and run the process. The web interface is shown in Figure 30 and the scope for viewing the results of a run is shown in Figure 31. The scope displays a simulation run of the wireless control system shown in Figure 25. Additional traffic is simulated in the middle of the run, at t = 20–40 s, and the disturbances due to the extra traffic are seen in the control signals. The signals are plotted in real-time and stored in a database for later retrieval. Previous experiments are saved in a database by the MoCoNet server, and the user can load them for later inspection. [P1]

The MoCoNet interface is extended into the PiccSIM Researcher's remote interface to enable remote usage of PiccSIM. It is implemented as a special version of the MoCoNet interface, with options suitable for research work on PiccSIM. The PiccSIM Researcher's Interface allows other researchers to upload custom simulation models (a Simulink model and an ns-2 TCL script) and to run simulations on PiccSIM, thus serving a larger group of researchers for WNCS simulations.
Figure 30. MoCoNet – the PiccSIM remote user interface.
Figure 31. Scope of the remote user interface, showing the control signal and output response of the process, and the communication delay. Additional traffic is simulated between 20 and 40 seconds.
4.6. Automatic Code Generation and Implementation

For implementing the controller or generic node algorithms on actual wireless node hardware, the PiccSIM Toolchain automatically converts the algorithms in the Simulink model to C code. This allows one to test the designed and simulated system on real hardware, without the laborious and error-prone task of implementing the same algorithms on the target platform. With the code generation capabilities it is relatively easy to compare simulation results with the real performance, as no extra coding effort is needed. A comparison between the simulated and real performance has, however, not yet been done.

The code is generated with the Matlab Target Language Compiler (TLC) and the Real-Time Workshop Toolbox. The TLC constructs the code according to a TLC template with code instructions for the Simulink blocks. The generated code of the block contents is combined with a TLC-generated main file containing the framework for running the algorithm on the nodes. The wrapper code executes the computation task, reads and writes the inputs and outputs of the node hardware, and takes care of the transmission and reception of packets. The main code template is currently compatible with Sensinode Micro U100 series [142] wireless nodes, but it can easily be modified for different hardware. The complete code is compiled and programmed to the node, on a UNIX operating system or with Cygwin in Windows, with the FreeRTOS operating system [52] and the NanoStack-1.0.3 communication stack developed by Sensinode. The Sensinode nodes communicate with each other using an IEEE 802.15.4 based radio. The same radio model is available and used in ns-2 for the simulations.

The generic node block supports reading and writing from/to analog interfaces and sending and receiving packets with the radio. With the code generation tool, hardware-specific options related to input/output voltages and resolutions can be specified. The transmitted packet format and data types are the same as the ones used in the simulation model with the simulated network. An application modeled in Simulink can thus be automatically implemented on sensor node hardware. This not only enables testing control applications, but also numerous sensor network applications. The automatic code generation feature is demonstrated with two examples in Section 4.7.5.

Not all models built with Simulink can be compiled to run on the node hardware, due to various restrictions of Matlab and the PiccSIM Toolchain. The main restrictions are: code generation is only available for the blocks supported by the Real-Time Workshop Toolbox; the algorithms are limited by the memory and computation capabilities of the hardware; only one type of packet can be received and transmitted; and the order of the execution phases is fixed to: receive, then computations and I/O, and finally transmit. A sketch of this fixed phase order is given below.
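The fixed phase order of the generated wrapper can be pictured with the following Python pseudostructure. The generated code is in fact C running on FreeRTOS; every name below is a hypothetical stand-in, and the sleep is a crude substitute for real-time scheduling.

```python
import time

def node_main_loop(radio, io, step_algorithm, period_s=0.1):
    """Stand-in for the TLC-generated wrapper: one fixed-order cycle per period."""
    while True:
        packet = radio.try_receive()           # 1) receive (None if nothing arrived)
        inputs = io.read_analog()              # 2) computations and I/O
        outputs, tx_data = step_algorithm(packet, inputs)
        io.write_analog(outputs)
        if tx_data is not None:
            radio.transmit(tx_data)            # 3) transmit, always last
        time.sleep(period_s)                   # stand-in for RTOS task scheduling
```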
Figure 32. Window for automatic code generation.
4.7. Simulation Case Studies

In this section, some PiccSIM simulation studies are presented. The simulations have been used to develop PiccSIM by testing different simulation scenarios, and to obtain better insight into the behavior of WNCSs. These simulations show the capabilities of PiccSIM, and they also provide an understanding of how several networked control loops interact. Although all cases are control system simulations, other distributed computation applications, for example consensus algorithms or sensor network applications, can also be simulated. The IEEE 802.15.4 radio is used in all the PiccSIM simulations here and in Chapter 5.

The first two scenarios are related to mobile robots, either a single robot or a squad of robots. They emphasize the packet routing performance, as the network topology changes when the mobile nodes move. The first simulation is a comparison of different routing protocols, whereas the second one also compares different control structures: the jitter-margin-tuned PID controller versus a Kalman filter with a conventional PID controller. In Section 4.7.3, a heating and ventilation case for an office, based on wireless measurements, is simulated with the indoor models presented in Sections 3.2 and 4.3.4. Then, in Section 4.7.4, an industrial
case is simulated, where a crane in a hall is controlled over a wireless network and the impact of the network QoS on the control performance is shown in a more realistic setting than in Section 3.5.2. Finally, the automatic code generation feature is demonstrated with two simple control cases: one is a laboratory-scale heated airflow process and the other is a more demanding trolley crane anti-swing control, which requires real-time operation. These final implementation cases show how easy it is to develop wireless control applications aided by the PiccSIM design, simulation, and automatic code generation tools.
4.7.1. Target Tracking Scenario

The target tracking scenario considers a grid of nodes forming a static sensor network and a mobile node, or robot. The sensor network serves as an infrastructure network for transmitting measurement and control signals from/to the mobile node and for providing a localization service. The objective of a centralized controller, located at an edge of the infrastructure grid, is to steer the mobile node along a predefined track. On the control side, a Kalman filter is used for filtering the mobile node position and for predicting the position when the information is not available due to packet drops. A PID controller is then used to control the mobile node. The control signal is routed to the mobile robot, which applies the acceleration command. If no control packet is received for three consecutive sampling intervals, an automatic stop mechanism is triggered. [P4]

The issue under investigation on the network side is whether singlepath or multipath routing is more advantageous in mobile scenarios. A similar scenario has been investigated with the TrueTime simulator, where the controller, robot hardware and communication protocol are implemented in Simulink [182]. An application using Kalman filtering for target tracking is presented in [113].

Nearby infrastructure nodes can measure their distance to the mobile node, for example by using ultrasound. The distances are transmitted to the controller. Using at least three distance measurements, the controller can determine the position of the mobile node by triangulation. In the simulations it is noted that the requirement of receiving three measurements from the same sampling interval is not always fulfilled; hence the controller has to use data from older sampling instants, for which more measurements have arrived. The time delay caused by this intermittent data is plotted in Figure 33. Notice that in this case the delay jitter is not caused by varying communication times in the network, but by the information available at the controller. A Kalman filter capable of fusing measurements with varying time delays (see [P7], in this case without the delay estimation part) is applied to estimate the current robot location and to filter out the localization noise.
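Determining a position from at least three distances can be done by linear least squares: subtracting one range equation from the others cancels the quadratic terms. A minimal NumPy sketch under invented anchor coordinates follows (the thesis additionally fuses the measurements with a delay-tolerant Kalman filter, which is not shown here).

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2-D position from >= 3 anchor distances.
    From |x - p_i|^2 = d_i^2, subtracting the first equation gives the
    linear system 2 (p_i - p_0) . x = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2."""
    p0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Invented example: four grid anchors, noiseless ranges to a known point.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)
print(trilaterate(anchors, dists))          # ~ [3. 4.]
```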
Table 2. Network and control performance metrics from the target tracking case.
| Protocol | Average delay [s] | Routing overhead [%] | Packet loss [%] | Control cost (ISE) |
| AODV     | 0.08              | 8.1                  | 23              | 18                 |
| LMNR     | 0.001             | 0.5                  | 10              | 8.6                |
Figure 33. Number of distance measurements used in position estimation for target localization, and the induced delay when the number of available measurements is less than the required minimum of three.
An IEEE 802.11 network is used. A comparison between a singlepath routing protocol, specifically Ad hoc On-demand Distance Vector (AODV) [126], and a multipath protocol called Localized Multiple Next-hop Routing (LMNR) [114], a multipath extension of AODV, is displayed in Figure 34, where the paths of the remotely controlled robot are shown. The numerical results are in Table 2. The simulation results show that the multipath routing protocol has lower communication delay, routing overhead and packet losses than the singlepath
routing protocol. Additionally, the communication outages are shorter and the number of automatic stops of the mobile node is low, whereas with singlepath routing it takes a considerable time before a new path is established. An example of this can be seen in the upper left corner of the trajectory in Figure 34. This simulation shows that multipath routing is advantageous in some mobile scenarios, since it can quickly switch to a backup route (see the next section for a counterexample). The control is satisfactory with both routing protocols during normal operation, as the network performance between the routing outages is the same.

The target position is time-dependent, with a total time of 250 seconds for the whole trajectory. Thus, when a communication break occurs, the mobile node is left behind. When the communication is restored, the controller moves the node straight to the current target position, as seen on the left in Figure 34. If the target position is paused during a communication break, the time to traverse the whole trajectory is 280 seconds for AODV and 270 seconds for LMNR, leading to a similar conclusion about the routing protocol performance.
Figure 34. Mobile node trajectory control with singlepath and multipath routing. Triangles indicate communication outages.
4.7.2. Robot Squad with Formation Changes

The target tracking scenario presented in the previous section is extended to a squad of 25 wireless mobile robots. The squad has a leader, which controls the movement of the rest of the group. The target scenario is an exploration or search-and-rescue type of situation, where the squad moves in different formations, depending on the environment or the requirements of the task. Several formation changes are made, causing changes in the network topology. In this case the control and network interactions are clear: the controlled mobility changes the network topology, which causes rerouting; vice versa, the rerouting performance determines the network availability for control. The main objective is to evaluate routing protocols and control architectures in a scenario with harsh network conditions. This case has been studied in [130] and [P5].

Compared to the previous simulations, the infrastructure nodes are removed and each robot can localize itself, for example using GPS or inertial measurements. The robots send their position information to the leader robot. The leader then calculates the desired path and sends the control signals, taking into account collisions and the final desired formation. The position controller for each mobile robot is a discrete-time PID controller. Several control structures are compared: a network-aware PID controller tuned with the jitter margin method ((27) in Section 2.5, with δmax = 3h); a conventional PID controller (for comparison purposes tuned with no jitter margin, δmax = 0); and a Kalman filter used as a state estimator together with a non-network-aware, conventionally tuned PID controller (δmax = 0). For comparability, the same tuning method is used for all the controllers, either taking the delay jitter of the network into account or not. The controller with zero jitter margin has higher controller gains and should give a better performance, but it is less robust to delay variation than the jitter-margin-tuned controller. The control performance is calculated with the integral of squared error (31) between the desired and actual locations, summed over all the mobile robots.

Simulations of four formation changes, shown in Figure 35, of a squad of 25 robots are made. The differences on the network layer between singlepath and multipath routing, and on the control layer between the different controller structures, are investigated. The results are compared to the case with no mobility, where the nodes in the network do not actually move, and to the case without a network, that is, control with perfect communication. The network results are in Table 3, and all the control results (ISE cost function (31) and time to reach the final formation) are collected in Table 4. The ISE cost is only calculated for the part without an outage, because the error during an outage would otherwise dominate the total cost and merely correlate with the performance of the routing protocol. Contrary to the previous case in Section 4.7.1, using singlepath routing is slightly more advantageous than multipath routing. The reason is that in
multipath routing, more link breaks occur in high-mobility scenarios, and switches to backup links, which may themselves be close to breaking, are frequent. Singlepath routing takes longer to find a new route, but the links seem to hold longer; hence the differences in the NCC between the singlepath and multipath protocols. Since the routing is under heavy load, with frequent route breaks, a better performance might be achievable by flooding. [P5]

Table 3. Network performance metrics from the robot squad simulations using a jitter-margin-tuned PID controller.
| Configuration | Avg. delay [s] | Routing overhead [%] | Packet loss [%] | NCC (82) | Packet drop fairness (85) |
| No mobility   | 0.009          | 0.8                  | 0.1             | 1.6      | 2                         |
| Singlepath    | 0.015          | 3.2                  | 30              | 1330     | 862                       |
| Multipath     | 0.09           | 11.2                 | 20              | 381      | 398                       |
Figure 35. Formation changes. The leader is indicated by “L”. All robots initially start from almost the same location in the center, at coordinates (30, 30).
Table 4. Control cost (ISE) and extra time to reach the formation with different control and network configurations.
| Configuration         | No jitter PID: cost | No jitter PID: time [s] | Jitter PID: cost | Jitter PID: time [s] | Kalman filter + PID: cost | Kalman filter + PID: time [s] |
| Perfect communication | 0.1                 | 0                       | 1.0              | 0                    | 0.3                       | 0                             |
| No mobility           | 2.3                 | 61.5                    | 3.3              | 10.5                 | 1.4                       | 0                             |
| AODV                  | 2.7                 | 84                      | 2.9              | 15                   | 1.9                       | 5                             |
| LMNR                  | 3.3                 | 40                      | 2.8              | 26.5                 | 2.0                       | 4.5                           |
Using a state estimator with a conventional PID controller leads to a better control performance than using the network-delay-jitter-tuned PID controller. State estimation, however, requires more computation. The non-network-aware controller has low control cost values, but there is a risk of it becoming unstable, contrary to the Kalman filter plus PID alternative, even though they have the same tuning. Without a network, the jitter margin controller is conservative compared to the other control structures. Contrary to expectation, when taking the network into account, the more conservative controller performs relatively better: introducing the network has only a small effect on its control performance. This is a general observation: with a higher jitter margin the control is more conservative, but also more robust to the adverse effects of the network, yielding graceful degradation.

The results can alternatively be compared by the time to reach the desired formation, listed in Table 4. Using the KF and PID controller is better than the jitter-margin-tuned PID, and the non-network-aware controller fares the worst. This shows that it is more advantageous to use a network-aware control structure, even if the pure performance metrics may be worse.

The robot squad scenario is furthermore evaluated with different sampling intervals, or packet rates, and using prioritization based on the packet forward count. Using longer sampling intervals improves the network performance but degrades the control results; thus there is a trade-off between network and control performance [93]. Prioritization equalizes the network QoS between control loops and yields better overall control, similarly to Section 4.7.3, where load balancing between several access points is used [P5].
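For reference, the ISE comparison used in Tables 2 and 4 amounts to a sampled approximation of the integral of squared error; the exact definition is (31) and is not reproduced here. A sketch, assuming sampled reference and actual trajectories and a sampling interval h:

```python
import numpy as np

def ise(y_ref, y, h):
    """Integral of squared error, approximated as a Riemann sum over samples.
    y_ref, y: arrays of shape (n_samples,) or (n_samples, n_robots)."""
    err = np.asarray(y_ref) - np.asarray(y)
    return h * np.sum(err**2)

# Invented example: a first-order step response against a unit step reference.
h = 0.1                              # assumed sampling interval [s]
t = np.arange(0, 10, h)
y_ref = np.ones_like(t)
y = 1 - np.exp(-t)
print(ise(y_ref, y, h))
```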
4.7.3. Building Automation Scenario

The building automation case is a heating, ventilation and air conditioning (HVAC) scenario. The office of the Control Engineering group at Aalto University, Department of Automation and Systems Technology, is used as a test case for wireless HVAC system simulations, similar to the cases studied in [76]. The layout of the office, with a total of 39 rooms, is shown in Figure 36. The temperature and CO2 concentration of the office rooms, which depend on the occupancy of the rooms, are modeled using first principles [P4].

The network is a wireless IEEE 802.15.4 network, as it is suitable for building automation [76], using the AODV routing protocol [126]. Both the wall propagation model presented in Section 4.3.4 [P4] and the packet drop model identified from real measurements, as described in Sections 3.1-3.2 and [P6], are tested. Here, only the results using the measured packet drop model presented in Section 3.2 are shown, as the results are very similar. The measurements from the office prototype locations are generalized to the whole building, as the rooms are more or less identical. The paths between the nodes in the building are categorized according to the six prototype locations. As there are eight different path measurements for one prototype location, one of the path models is randomly selected for each node pair in the simulation model. Thus, spatial variation between similar links in the office is obtained.

Wireless sensors in each room measure the temperature and CO2 concentration. This information, along with the desired temperature (set by the occupant) and the status of the lights, is sent to the central controller at the access point.
Figure 36. Layout of the office in the building automation case, with node positions and wall materials indicated.
Additionally, presence event messages are sent to the command center when people enter or exit a room, which turns the lights on or off. The central control system coordinates the heating and ventilation of the individual rooms based on the wireless measurements. The local heating/cooling and ventilation commands are transmitted back to the rooms. The wireless network carries both time- and event-triggered messaging, all communicated through the wireless gateway. This communication topology is similar to WirelessHART, where all the data is routed through the gateway. The centralized control architecture is justified, since it provides better capabilities for applying globally optimal control schemes.

Because of the number of nodes, the multiple hops, the radio environment, and the random access MAC, there are packet drops, which impair the control result. An appropriate sampling interval cannot easily be calculated in advance, since the throughput depends on the specific network, protocols, topology, and application-generated traffic. In this case, a sampling interval of h = 30 s with data quantization turned out in simulations to be the shortest obtainable without causing congestion. The average packet drop rate turned out to be 18 %, mainly because of the channel conditions and the multihop communication. Hence the controllers need to be tuned to tolerate gaps in the measurements.

The end-to-end delay in this scenario is on average 0.14 s, considerably smaller than the sampling interval. Thus, only packet drops and outage lengths need to be considered in the control design. The PID controllers for the heating control are tuned with the extended plant approach [47], where the controller parameters are determined partially based on the desired jitter margin δmax, which is set to the temporal length of two consecutively dropped packets (δmax = 2h = 60 s). Conventional tuning methods are not applicable, since they fail to guarantee stability under lost measurements.

Examples of the simulation results are shown in Figure 37, where the temperature of one room is plotted. The results are given for a PID controller tuned to be stable with either one or two consecutive packet losses. The response of the controller with the larger jitter margin is slower, but it is conversely less prone to oscillations during packet drops.

The packet drops and the network cost for control (82) for each room are shown in Figure 38a,b. The QoS is worse for the nodes multiple hops away from the access point. The control performance is evaluated with the ISE cost criterion (31) with respect to the desired temperature. The increase in control cost compared to the case without a network (no packet drops) can be seen in Figure 39a,b, which reflects the different QoS conditions. Evidently the performance of the control system depends on the network QoS, such that the control performance of the far-away nodes is limited by the network.
Figure 37. Simulation results of the building automation case. Temperature and heating in room 18. Comparing PID controllers tuned with different jitter margins: allowing for one packet drop (dotted) and allowing for two consecutive packet drops (solid). The number of people in the room is indicated at the top. An initial time of 20 minutes to stabilize the room temperature is not shown here. [P6]
To improve the control results, the controllers of the far away rooms could be re-tuned with a larger jitter margin. An alternative option is to add more access points and spread them out in the building. A higher bandwidth connection, such as wired Ethernet or WLAN, between the access points could then be formed. Such a hierarchical design increases the performance of the network and, hence, improves the control results.

By using a hierarchical network (Figure 38c,d), the network QoS increases significantly: the routing overhead, delay, packet drop and NCC listed in Table 5 are all reduced. This results in better control and a smaller control cost, as depicted in Figure 39c, comparable to the case without a network, Figure 39a.
[Figure: floor-plan maps per room. Left column: packet drops (#); right column: network cost for control. Top row (a, b): one access point; bottom row (c, d): two access points.]
Figure 38. Packet drops and network cost for control, for individual rooms. On the left, the packet drops, and on the right the corresponding network cost for control. Top: one access point. Bottom: two access points. Boundary between the nodes belonging to the two access points indicated. [P6]
This short example shows that the design of a WNCS is not straightforward. Issues related to the drawbacks of the wireless network need to be considered in the control design. The drawbacks can be compensated for by selecting proper network protocols and re-tuning the control system. The topology of the network is also worth considering. By simulating the system, one can make changes to the communication and control design and iterate before installation. Thus, PiccSIM is a valuable simulation tool for testing wireless control applications.

Table 5. Building automation simulation results.

                                    One access point   Two access points
  Packet drop [%]                   18                 4.5
  Network cost for control (82)     0.40               0.05
  End-to-end delay [s]              0.14               0.075
  Routing overhead [%]              2.5                0.3
  Mean control cost (ISE)           0.037              0.023
Figure 39. Control cost for building automation case. Without network (a), one access point (b), and with two access points (c). The control cost reflects the network cost for control shown in Figure 38.
4.7.4. Crane Control in an Industrial Hall

This case considers wireless control of a trolley crane in an industrial hall. It emphasizes the real-time requirements of wireless communication in wireless control applications. The operator gives the velocity reference for the crane to the control system with a wireless handheld device. The control messages are routed over a local wireless IEEE 802.15.4 network, installed in the hall and on the crane, to the crane control system. [P6]
Figure 40. Overview of Simulink model for wireless control of crane.
The laboratory scale crane model presented in [44] is scaled up by a factor of five and used in the simulation cases. The crane control system consists of PID controllers for the trolley and hoist motors, which operate the actuators based on the velocity reference given by the operator through the wireless handheld device. An overview of the Simulink model is shown in Figure 40. For simulation purposes the operator is represented by PID controllers for the vertical and horizontal movement of the load and one for stabilizing the load swing. The load of the crane is moved according to a predefined trajectory, given as reference to the "operator controllers". The controller tuning is selected such that good performance is obtained without packet drop. There are PID tuning rules for WNCSs that could be applied, but they assume simple process models and cannot be applied to the complex and nonlinear crane model.

To assess the impact of network QoS on the control performance, simulations with different network QoS parameters are made. Several load movement trajectories are simulated with different Gilbert-Elliott (10) network model parameters (a sketch of this drop model is given below). Examples of the resulting load angle swing are given in Figure 41 for different packet drop parameters. A significant increase in the oscillations is seen depending on the packet drop distribution. With a correlated packet drop, where the probability of packet drop is 95 % given that the previous packet is dropped, the fast oscillations are significantly larger compared to the uniform packet drop distribution, even when the mean drop probabilities are the same.

The resulting control performances, each averaged over ten runs, are shown in Figure 42. The control cost, the integral of squared error (31) of the load angle, is shown as a function of packet drop probability and network cost for control (82) in Figure 43. Considering only the packet drop does not give a good indication of the resulting control performance, whereas the NCC correlates well with the control cost. This result is general, as similar results are obtained with a simple first-order system in Section 3.5.2. There are naturally variations depending on the particular random packet drop realization.
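The Gilbert-Elliott model is a two-state (good/bad) Markov chain. The sketch below samples a drop sequence from such a chain; the function name is illustrative, and the relation used to hit a target mean drop rate is the standard stationary distribution of a two-state chain, not a formula from the thesis.

    import random

    def gilbert_elliott_drops(n, p_drop, p_bad_bad):
        """Sample a packet drop sequence from a two-state Gilbert-Elliott chain.

        p_drop    -- target mean drop probability (stationary bad-state share)
        p_bad_bad -- P(drop at k | drop at k-1), i.e. the drop correlation
        """
        # Stationarity of a two-state chain gives
        # p_drop = p_good_bad / (p_good_bad + 1 - p_bad_bad),
        # solved here for the good->bad transition probability.
        p_good_bad = p_drop * (1.0 - p_bad_bad) / (1.0 - p_drop)
        dropped, seq = False, []
        for _ in range(n):
            p = p_bad_bad if dropped else p_good_bad
            dropped = random.random() < p
            seq.append(dropped)
        return seq

    # e.g. 30 % mean drop with 95 % correlation, as in the crane simulations
    drops = gilbert_elliott_drops(100000, 0.30, 0.95)
    print(sum(drops) / len(drops))  # close to 0.30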
Figure 41. Crane load angle swing with different packet drop probabilities of the network. Left: no packet drop. Center: 30 % uniformly distributed packet drop. Right: 30 % packet drop with correlation of 95 %.
The radio environment of an industrial hall where a similar crane is located is measured in Section 3.1. The packet drop range is 10-50 % and the mean outage length of consecutively dropped packets is 0.05-0.5 seconds, as shown in Figure 11; these are similar to the values used in the simulation results shown in Figure 42. From Figure 42 it can be concluded that the control performance in a real environment is degraded by about 200-400 % compared to the case of perfect control. There is thus room for improvement of the network, to regain wireless control performance and reliability comparable to the wired system.
Figure 42. Integral square error of load angle as a function of packet drop probability and mean bad state residence time.
Figure 43. Integral square error of load angle as a function of packet drop probability and network cost for control (82). Linear mean square error fit added.
The crane model is based on a real laboratory scale trolley crane, which is used in the PiccSIM Toolchain demonstration in the following section, where automatic code generation is demonstrated for compensating the load angle swing.
4.7.5. PiccSIM Toolchain Demonstrations

Two brief examples are given here to demonstrate the modeling, simulation and automatic code generation capabilities of the PiccSIM Toolchain (Sections 4.4 and 4.6). A laboratory scale trolley crane system with an ultrasound based measurement system to measure the swing of the load is used as a testbed [44]. The system includes a Kalman filter to estimate the load angle when the ultrasound measurement system is unable to calculate it. Previously, the swing was compensated with a wired control system using a fuzzy logic controller.

The Kalman filter and a simple anti-swing controller are modeled with the generic node blocks of the PiccSIM Toolchain. The corresponding PiccSIM radio blocks are added to enable wireless communication between the nodes. The process to be controlled is modeled and attached to a generic node block functioning as an interface node to the trolley crane. The interface node samples the process with an analog input, and sends the measurement to the angle estimation node (the Kalman filter).
The Kalman filter node estimates the current load angle and angle velocity, and sends these values to the controller node. The controller is a PD controller, which uses the received estimates to calculate an appropriate control signal to compensate the load swing. Upon reception of the control value from the controller node, the interface node outputs it through the analog output to the trolley crane system. The sampling interval of the control system is 0.1 seconds, and the whole loop is traversed in two sampling intervals (because of time-driven operation). The whole simulation model is shown in Figure 44.
Figure 44. Simulink simulation model for load swing estimation and control with wireless nodes. Green: blocks implemented on wireless nodes with automatic code generation. Gray: blocks used for communication. Red: model of the process, only used for simulation. Wireless communication indicated with arrows.
The system is implemented with the PiccSIM Toolchain and the controller is tuned by simulation. When the results are approved, the interface, Kalman filter and controller blocks are converted into C code using the automatic code generation feature of the Toolchain, and downloaded to the Sensinode wireless nodes. The interface node is connected to the trolley crane system for reading the load angle measurement and writing the trolley swing compensation control signal. The whole system is run and the anti-swing result is shown in Figure 45. [P3]

Another example using automatic code generation is realizing the wireless controller of a heated airflow process, the "Process Trainer" PT326 by Feedback Ltd., which is part of the Automation and Systems Technology laboratory course. The control system consists of a wireless PID controller that controls the temperature of the out-flowing air, as shown in Figure 46. The same process has also been tested in a NCS setting with a controller area network [159].
Figure 45. Anti-swing test run with wireless nodes, angle of load and anti-swing control signal shown. Start of anti-swing at 50 s.
Figure 46. Wireless control of heated airflow process.
Figure 47. Control result of wireless control of heated airflow process.
The process is first identified and modeled using input/output data gathered with the wireless nodes. The transfer function of the process is identified as

    G_m(s) = \frac{12.5925\, e^{-0.18725 s}}{0.0233 s^2 + 0.6049 s + 1}    (86)
by minimizing the integral squared error between the step response of the process and that of the model (a sketch of such an ISE-based fit is given at the end of this subsection). A PID controller is tuned with an ISE cost optimization based tuning and a jitter margin constraint, using the tuning tool of the PiccSIM Toolchain. The satisfactory control result when controlling the actual process is shown in Figure 47.

This demonstrates that a wireless control system designed in simulations can be automatically implemented on actual wireless nodes with the PiccSIM Toolchain. The code is for instance efficient enough to run a two-state Kalman filter, ten times a second, to estimate the load angle and angle velocity. Effort may be needed to connect the wireless nodes to the process (sensors and actuators), including making the interface circuitry to accommodate the respective input and output voltages of the node and of the process. The voltage bias and range need to be calibrated to translate the voltages to temperature values. The computations for the conversion are implemented in the sensor and actuator nodes, as part of their program.
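As an illustration of the identification step, the sketch below fits a second-order-plus-dead-time model to step-response data by minimizing the ISE with SciPy; the data here are synthetic and the optimizer choice is an assumption, not the procedure of the PiccSIM tuning tool.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.signal import TransferFunction, step

    def model_step(params, t):
        """Step response of K*exp(-L*s) / (a*s^2 + b*s + 1) on the time grid t."""
        K, L, a, b = params
        _, y = step(TransferFunction([K], [a, b, 1.0]), T=t)
        return np.interp(t - L, t, y, left=0.0)  # apply the dead time by shifting

    # synthetic step-response data standing in for the node measurements
    t = np.linspace(0.0, 5.0, 500)
    y_meas = model_step([12.5925, 0.18725, 0.0233, 0.6049], t)
    y_meas = y_meas + np.random.normal(0.0, 0.05, t.size)

    # identify the model by minimizing the ISE between data and model response
    ise = lambda p: float(np.sum((y_meas - model_step(p, t)) ** 2))
    res = minimize(ise, x0=[10.0, 0.1, 0.05, 0.5], method="Nelder-Mead")
    print(res.x)  # approaches (12.5925, 0.18725, 0.0233, 0.6049)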
4.8. Summary

In this chapter the developed communication and control co-simulator PiccSIM was presented. The integration of ns-2 and Simulink delivers a versatile tool to simulate and study aspects of WNCSs. Several tools are available in PiccSIM that enable the design of the network and controllers, and automatic code generation for implementation. Both simulators are extended based on the special simulation requirements of WNCSs, such as packet drop models. The integration of the simulators is accomplished with simulation time-synchronization, data exchange capabilities between the simulators (enabling for instance controlled node mobility), and a Simulink blockset library for communication over the simulated network. With the graphical user interfaces and tools of PiccSIM, the development of WNCSs can be carried all the way from design and simulation to implementation.

The chapter is concluded by some simulation cases, where the capabilities of PiccSIM and the properties of WNCSs are highlighted. The simulation cases show that there are considerable interactions between the network and the control, where the control performance depends significantly on the network QoS and the specific behaviour of the network, as shown in the crane control case. The network and protocol design determines the resulting communication performance and, further, the control result. By simulation, the suitability of network protocols for real-time control applications can be studied. Conversely, the application determines the proper selection of the network protocols, depending on the application properties and requirements.
5. ADAPTIVE CONTROL IN WIRELESS NETWORKED CONTROL SYSTEMS

In this chapter several novel network adaptive control algorithms are presented. The different adaptive schemes [P8]-[P11] are presented in separate subsections, with the conclusions gained from the simulations at the end of each section.

The first adaptation scheme is the adaptive jitter margin PID controller, which changes its tuning based on the observed delay jitter of the network [P8]. This is a simple scheme where first the network induced delay jitter is measured, whereupon a suitable controller tuning is selected such that the control loop is stable with the given jitter. The tuning is then changed on-line as the observed network statistics change. The adaptive jitter margin controller adapts only itself according to the network characteristics.

The adaptive control speed scheme of Section 5.2 tries to affect the network performance [P9]. Whereas the previous scheme only changes to a more conservative tuning in terms of the jitter margin and cannot prevent the network from congestion, this scheme changes the used network bandwidth such that it avoids congesting the network, with the accompanying bad control performance.

The previous two adaptive control schemes are both for plants consisting only of SISO control loops. The step adaptive controller presented in Section 5.3 is a decentralized control scheme for MIMO plants [P10]. Full MIMO control is not desired in WNCSs, because the resulting network traffic would be high. Instead, several SISO control loops are formed. The interactions between the control loops are then handled by selecting an appropriate tuning depending on the situation. The appropriate tuning is explored in Section 5.3.2.

In Section 5.4, the case of a longer network outage, when the jitter margin stability condition is exceeded, is considered. A heuristic scheme based on the IMC design to bring the process to a desired steady-state during the outage is examined [P11]. The control action during an outage is based on the fact that the controller is tuned such that the closed-loop system behaves as a first-order system.
5.1. Adaptive Jitter Margin PID Control

In this section the adaptive jitter margin (AJM) controller, which adapts to the delay jitter or packet loss of the network, is presented. Deploying a control system in a real-world application usually demands simple configuration or self-configuration. In WNCSs this is even more important, as the network performance changes with time, depending on interference, moving machinery, routing, or the installation of new wireless devices. Adaptive controllers are needed in these cases, or if the network performance is not known exactly in advance. It is not practical to re-tune the whole control system after every change in the network or the environment. Adaptation brings robustness to the control system in the case of changed network parameters, as the changes are compensated by automatic controller re-tuning. This motivates the development of adaptive network aware controllers.

Changes in the network might affect the control performance, which should adapt to the new conditions. These changes can stem from obstacles or interference from other devices, which may change the route in a multihop system. The tuning of the AJM controller is automatically selected such that (re-)configuration is not needed when a WNCS is deployed, new devices are added, or the network topology is changed. A similar approach is taken in [116], where gain scheduling of a state-feedback controller depending on the number of communication hops is used.

In the building automation case presented in Section 4.7.3, the QoS delivered to the different rooms depended on the location in the network and the distance to the access point. This implies that the control loops should be tuned individually according to the observed network QoS. Therefore, the adaptive jitter margin PID controller is developed, such that every loop obtains a suitable tuning based on the experienced network properties and no laborious network analysis and subsequent tuning is needed.

The principle of the adaptive jitter margin PID controller is to tune a PID controller as tightly as possible without endangering stability because of varying delay or packet drops. This is accomplished by observing or estimating the delay jitter, and tuning the controller according to the maximum currently estimated jitter margin δmax(k) with a jitter margin tuning method [47]. The tuning rules (27)-(28) in Section 2.6.1 are used. [P8]

Two alternative methods for estimating the delay jitter δmax(k) are developed. The first is based on counting the timestamps and the gaps between the received packets, which is simple and exact (Section 2.4.1), but relies on certain assumptions. The other is based on probabilistic estimation using a Kalman filter, which is more complex and has less restricting assumptions. The delay estimation is made on the maximum a posteriori probability of the delay, given the current estimated process output and the received measurement.
It takes into account the uncertainty of the estimate and the probability of the delay. [P7]

In practice, the jitter margin for the controller tuning should be at least the observed delay jitter with a margin of one sampling interval, because if an additional packet is dropped it will increase the necessary jitter margin by h for the control loop to remain stable. The jitter margin used for controller tuning is thus at least h according to
    \delta'_{max}(k) = \arg\max_d \{ D(k) \} + h    (87)
The tuning of the AJM-PID controller is then updated with this delay jitter estimate at every time-step.

Two simulations with delay jitter induced by a network are performed. In Section 5.1.1 a Simulink-only model with a specified packet drop probability is used, and in Section 5.1.2 PiccSIM is used, where the packet drop and delay are simulated with ns-2 for more realistic results. Both the simple and the advanced delay jitter estimation techniques are compared. In both cases the sensors, controllers and actuators are time-driven, with a sampling interval of h. The process to be controlled is in both cases (26) with K = 1, T = 5 s, τ = 0 s, and a minimum communication delay of LN = h. The output has added white noise with variance R = 0.01².
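A minimal sketch of the simple, timestamp-based jitter margin estimate is given below, under the stated assumptions of that method (time-driven sampling with interval h and timestamped packets); the function name and windowing details are illustrative.

    def jitter_margin(timestamps, h, window):
        """Timestamp-based delay jitter estimate (simple method).

        Gaps in the received timestamps reveal dropped packets; the largest
        gap within a sliding time window, plus one sampling interval of
        safety margin as in (87), is used as the jitter margin.
        """
        recent = [t for t in timestamps if t > timestamps[-1] - window]
        gaps = [b - a for a, b in zip(recent, recent[1:])]
        return max(gaps, default=h) + h

    # e.g. h = 1 s, one packet lost around t = 5 s
    stamps = [0, 1, 2, 3, 4, 6, 7, 8, 9, 10]
    print(jitter_margin(stamps, h=1.0, window=60.0))  # -> 3.0 (a 2h gap + h)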
5.1.1. Delay Jitter Estimation Simulations

A simulation with random packet drop is performed in Simulink to compare a constant gain PID with the AJM-PID controller. The constant discrete-time PID controller with sampling interval h = 1 s is tuned for the maximum jitter margin of δmax = 5 s by (88), and the jitter adaptive PID to the current estimated jitter. Both jitter estimation algorithms are evaluated with the AJM-PID controller. A tightness factor of α = 1 is used.

A random packet drop with probability pdrop = 0.3 and a maximum of six consecutive packet drops is implemented with a Markov chain with six states (9), resulting in a jitter margin of dmax = 6h seconds. In the simulations pdrop is a linearly increasing function of time (in this case from 0 to 0.3). The maximum delay in the simulations is (1 + 5)h seconds, with an overall maximum delay jitter of 5h seconds (see Figure 49). The average delay distribution when pdrop = 0.3 is about

    \pi(d) \approx 10^{-2}\, [70\ \ 21\ \ 6.3\ \ 2.0\ \ 0.57\ \ 0.17]^T

[P7], which is used in the KF based delay estimation algorithm for the whole simulation. A time-window (6) of TW = 60 seconds is used for the jitter margin calculations. The KF process noise is chosen as Q = 0.0001². The accuracy of the KF-based delay estimator depends on the change in the output Δy of the process. To obtain good delay estimates, only estimates based on data exceeding a validation threshold tvalid are selected.
The probability of a wrong delay estimate as a function of the output change Δy for the simulated process is shown in Figure 48 for different true delays [P7]. The probability of a wrong estimate is large for improbable (large) delays, but at about an output change of Δy = 0.03 the probability is smaller than 0.2 in every case. Thus a threshold tvalid = 0.03 for the valid delay jitter estimates is selected.

Simulations of 200 step responses, with a frequency of f = 1/50 1/s, are done. The performance of the methods is evaluated by a jitter margin estimation cost and a control cost. The jitter margin estimation cost is given as the average absolute jitter estimation error

    J_{\delta,est} = \frac{1}{N} \sum_{k=0}^{N-1} \left| \mathrm{ceil}(5k/N) - \frac{\delta_{max}(k)}{h} \right|    (89)

where ceil(5k/N) approximates the "maximum possible" jitter margin, δmax(k) is the estimated jitter, and N = 200/h · 2/f. The control costs are the IAE (29) and ISE (31) averaged over all the step responses.

The true delay and the maximum delay estimate of both algorithms are plotted in Figure 49. Both algorithms estimate the delay jitter properly. The KF based algorithm overestimates the maximum delay in the beginning of the run, because it uses the wrong delay distribution (the one valid at the end of the run).
Figure 48. Probability of wrong delay estimate as a function of change in output for different values (1‐6) of true delay [P7].
The performance costs are compared in Table 6, including constant controllers with tunings of δmax = h and δmax = 5h, corresponding to the tuning for the minimum and maximum delay jitter, respectively. The case of a simulation with no packet drop and constant jitter margin tuning with δmax = h is also given (the case with no network and thus the maximum achievable performance). Both jitter margin estimation algorithms result in lower costs than the constant alternatives, but naturally also in a higher cost than with perfect communication and without packet drops. Selecting a tuning for the minimum delay is worse than for the maximum delay, although the average delay jitter is closer and the jitter margin cost is lower in this case. This implies that it is better to overestimate the delay jitter, as the opposite case will degrade the control performance and endanger the stability. The KF based algorithm performs better and has slightly lower costs compared to the simple algorithm, because of better delay jitter estimates.

Table 6. Jitter margin estimation and control simulation results.

  Jitter margin method      Jitter margin cost, Jδ,est   Control cost, JIAE   Control cost, JISE
  Assume min, δmax = 1      1.97                         12.5                 6.4
  Assume max, δmax = 5      2.5                          10.7                 5.97
  Simple estimation         0.69                         6.22                 3.57
  Advanced estimation       0.66                         6.08                 3.37
  No network, δmax = 1      -                            5.32                 2.90
Figure 49. Delay and estimated maximum delay for simple (left) and KF based algorithm (right).
5.1.2. Adaptive Control Tuning Scenario Simulations

The PiccSIM simulated scenario considers a distributed plant with 25 wirelessly measured processes arranged in a 5x5 grid and centralized control. The distances between the sensors are set such that the radio can only communicate with the neighboring nodes (the eight square neighbors). Because of fading there will be packet losses. Additionally, there is extra delay and packet loss because of MAC queuing and collisions. Thus the hop count and the network performance depend on the distance from the central controller, located in a corner of the grid, such that the farthest nodes should get the worst quality of service.

The dynamic model of the process is (26), replicated for the 25 SISO loops. PID controllers with the tuning (27) are applied. A tightness factor of α = 0.9 is used, because of the additional unreliability of the wireless network. The worst case packet drop is in this case determined by the simulated network properties and the traffic rates of this simulation case. Based on the simulation results, an average control loop experiences packet drops with a probability of about pdrop = 0.1. This value is used in the packet drop probability model (9) of the KF based jitter estimation algorithm. Since the network is simulated, the drops are in reality not uniformly distributed, nor uncorrelated.

The network performance results are given in Table 7 and the control results in Table 8. It can be noted that the network communication delay is much smaller than the sampling interval h of the control loop. This is typical, and motivates the assumptions of the simple delay jitter estimation method.

The control costs for each individual loop, displayed in Figure 50, visualize the differences between the tuning alternatives. Constant tuning assuming the maximum delay jitter has the highest costs, practically equal to using no network. This implies that the tuning is in fact robust to packet drops. The simple delay estimation works satisfactorily and has the lowest costs. In this case the restricting assumptions of the method are fulfilled; if the assumptions of the simple delay estimation method are not fulfilled, the advanced method may perform better. The advanced delay estimation method tends to have a larger cost deviation between the control loops, because of the uncertainty in the estimation. The larger costs are due to delay jitter overestimates, which result in conservative control, so the stability is not endangered.

Figure 51 displays the packet drop and the network communication delay as a function of the distance between the sensor and the controller. Indeed, they increase with increasing distance. Thus, the network quality of service experienced by the control loops differs, which suggests the need for individual (adaptive) tuning of the loops in a WNCS.
Table 7. Network results of a run with PiccSIM.

  Average communication delay [s]   Communication delay std [s]   Packet delivery [%]   Routing overhead [packets]
  0.031                             0.014                         93                    216
Figure 50. Scatter plot of each control loop's cost (average JISE) for the different delay jitter estimation methods.
Table 8. Control results of a run with PiccSIM.

  Jitter margin method                        Control cost JISE   JISE std
  Assume maximum delay jitter, δmax = 5       3.31                0.078
  Simple estimation                           2.03                0.27
  Advanced estimation                         2.89                0.28
  No network                                  3.42                0.079
Figure 51. Scatter plot of measurement packet drop and communication delay of each control loop as a function of distance from sensor to controller.
The average control costs in Table 8 indicate that both jitter margin estimation methods give better control performance than the constant tuning case. The standard deviations of the cost between the different control loops are, however, larger (see also Figure 50). This is mainly because the control loops experience different network performance (depending on the distance to the controller), and some loops can be tuned tighter than others, giving a lower cost. A similar phenomenon was observed when the control was tuned too conservatively or too aggressively compared to the network QoS in Figure 18.
5.1.3. Summary

The adaptive tuning of PID controllers in wireless control systems is the first of the adaptive control methods developed in this thesis. The tuning is based on the estimated delay jitter caused by the network. Two jitter estimation algorithms are compared: one is simple, but has the constraining assumptions that only packet losses are present and that packets carry timestamps; the other is more general and is applicable to any network delay, even an unknown one.

The adaptive jitter margin PID is tuned based on the estimated delay jitter, such that the control loop is stable for all observed delay jitters.
The adaptation algorithm is suitable both for step-wise changes in the network conditions and delay jitters (shown with the Simulink simulations) and for slow changes (the PiccSIM simulations). The AJM-PID controller is compared in simulations and shown to perform better than a constant gain controller tuned for the minimum or maximum delay jitter. The AJM-PID is further tested with the PiccSIM simulator in a multihop scenario. The simulation results show that the network quality of service depends on the number of hops of the control loop communication. With the AJM scheme, the individual control loops are tuned independently online according to the network performance. This demonstrates the advantages of network aware adaptive controllers in a WNCS case: easy deployment, automatic tuning based on the observed network quality of service, and reaction to changing conditions. The overall performance is better than tuning for the worst case.
5.2. Adaptive Control Speed Based on Network Quality of Service

As discussed in Section 2.8, in a network with a CSMA type MAC, the network QoS depends on the amount of traffic in the network. If the network becomes congested, the QoS decreases drastically. On the other hand, the control system performance improves with a decreasing sampling interval, which implies more traffic. The target of the WNCS is thus to do cross-layer optimization to select a suitable sampling interval, at which both the network and control performances are good [92], [96].

The aim of the adaptive control speed (ACS) algorithm [P9] is to provide a distributed algorithm for adaptively selecting sampling intervals and control speed in a networked control system, based on a network related QoS measure. Control speed refers here to the speed of the step response, or rise time, of the control loop. Increasing the control speed must be accompanied by an increase in sampling rate. In case the network cannot deliver the required QoS, the control speed is reduced, yielding slower and more robust control. Reducing the sampling rate results in lower congestion of the network and hence a better QoS. This trade-off has previously been demonstrated in [128], where the sampling interval is changed according to a PI controller with a desired packet drop of 5 %.

The various control loops may have different requirements in terms of sampling interval, because of different time-constants of the controlled processes. This diversity of experienced QoS and different requirements must be coordinated, to enable a working WNCS. Instead of using a fixed controller tuning, the tuning is changed according to network congestion and, as a consequence, the sampling rate is changed.
This is the opposite approach to most other control adaptation mechanisms in the literature, where only the sampling interval is changed.

The framework for the adjustable control algorithm is the internal model control paradigm described in Section 2.7, because the control or step response speed can be conveniently described with one parameter, λ, in the continuous-time case (Section 3.4.1). This is transferred to a discrete-time controller, since the process measurements are transmitted in discrete packets over the network. The update algorithm for the control speed λ is in discrete time and is described in the next section.

In a WNCS, the network performance experienced by the controller depends, among other things, on the location of the control loop in the network and on the traffic generated by the other control loops on the communication path. In the following simulations it is assumed that the network congestion correlates with the packet drop rate of the network.

The presented ACS algorithm tries to converge to a suitable control speed for all the control loops in a NCS, such that a user specified QoS level is achieved. This is accomplished by adjusting the control speed λ, and indirectly the sampling interval h, which affects the network traffic and QoS. The λ based control design is applicable to stable processes, controlled using measurements transmitted over a network, either wired or wireless. No admission control is applied here. It is assumed that the network is designed such that sufficient bandwidth is available for the control application.

In the following subsections the adaptive control speed algorithm is described and some analysis is performed. The required internal model control preliminaries are given in Section 2.7. The simulation in Section 5.2.4 demonstrates a WNCS case, using the PiccSIM simulator.
5.2.1. The Adaptive Control Speed Scheme

The adaptive control speed algorithm adapts the λ parameter of an IMC tuned controller, depending on the network QoS. The user selects a desired network QoS level rd, for which the controller is stable and performs well, and ACS tries to maintain a suitable control speed and traffic rate to meet the goal.

The QoS measure should depend on the amount of traffic in the network. In this work the packet drop rate is used as the criterion. The algorithm can naturally be modified to adapt the control speed according to any other network based QoS measure, for example the network delay, the packet drop QoS measure (82), or any other network congestion related measure. If packet drops are used, a drop is detected by observing a gap in the sequence numbers of the received packets. The measured QoS is then
    r_{meas}(k) = \begin{cases} 1, & \text{if a packet is dropped} \\ 0, & \text{otherwise} \end{cases}    (90)

For practical application, the instantaneous packet drop is low-pass filtered to obtain the average packet drop QoS measure r. The filtered drop rate is then

    r(k+1) = \beta\, r(k) + (1-\beta)\, r_{meas}(k)    (91)

where 0 ≤ β < 1 is a filter constant and r_meas is the measured QoS. The total QoS rtot, used to evaluate the simulations, is calculated as a weighted sum of the QoS values of the individual loops, weighted by their share of the traffic, i.e. the reciprocal of the sampling interval h:

    r_{tot} = \frac{\sum_i r_i / h_i}{\sum_i 1 / h_i}    (92)
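The QoS bookkeeping of (90)-(92) is straightforward to implement; a minimal sketch (illustrative function names) is:

    def update_qos(r, dropped, beta=0.98):
        """Low-pass filtered packet drop QoS measure, eq. (91).

        r       -- previous filtered drop rate
        dropped -- True if a gap in the packet sequence numbers was observed
        beta    -- filter constant, 0 <= beta < 1
        """
        r_meas = 1.0 if dropped else 0.0          # instantaneous QoS, eq. (90)
        return beta * r + (1.0 - beta) * r_meas

    def total_qos(r_loops, h_loops):
        """Traffic-weighted total QoS over all loops, eq. (92)."""
        return (sum(r / h for r, h in zip(r_loops, h_loops))
                / sum(1.0 / h for h in h_loops))

    # e.g. three loops with different sampling intervals
    print(total_qos([0.02, 0.05, 0.10], [0.5, 1.0, 2.0]))  # -> 0.04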
The algorithm for adapting λ is built on the following desired properties. The control speed is changed proportionally to the dominating process time-constant and proportionally to the time passed since the last update (the update interval is the same as the sampling interval). This has the effect that all the control loops are adjusted relative to the natural process speed. The adaptation step-size is proportional to the error between the actual and desired network QoS. If any loop experiences worse QoS than desired, all loops reduce their traffic. Because of the exponential relationship between λ and γ (44), it is more natural to adapt the linear (with respect to the response speed) λ and then calculate the corresponding γ for the discrete-time controller. These considerations and the analysis in Section 5.2.3 lead to the following ACS update algorithm:

    \lambda(k+1) = \lambda(k) + c\, h(k)\, \Delta r(k)\, m(k)    (93)

where c > 0 is an update step scaling factor, h(k) is the current sampling interval, and m(k) is the update speed. Δr(k) determines the size and direction, or velocity, of the update according to

    \Delta r(k) = \begin{cases} \max_i(r_i) - r_d, & \text{if any } r_i > r_d \\ r(k) - r_d, & \text{otherwise} \end{cases}    (94)

where ri is the QoS of the ith loop and rd is the desired QoS. If any loop experiences worse QoS than desired, all loops use max_i(r_i) − r_d. This decreases the traffic generated by all loops, to obtain a better QoS for the loop whose QoS is too low. This global adjustment is used because bad QoS is usually due to the other control loops taking too much of the available bandwidth. Otherwise the loops adjust λ according to the local QoS.
Moreover, the update speed depends on Δr(k) such that

    m(k) = \begin{cases} \lambda / T, & \text{if } \Delta r(k) \le 0 \\ T / \lambda, & \text{if } \Delta r(k) > 0 \end{cases}    (95)

where T is the time constant of the process. The update speed thus depends on how much the control speed λ differs from the natural speed of the process T. In case of a process model of higher order, the dominating time-constant is used for T in the adaptation algorithm.

At every time-step the control speed λ is updated and the corresponding IMC controller is calculated. The sampling interval is updated according to (74). The sampling interval of the sensor and controller is thus proportional to the control speed. The sampling interval is additionally quantized to a power of two times a base sampling interval hbase such that

    h(\lambda(k)) = h_{base}\, 2^p, \quad \text{where } p = \mathrm{floor}\!\left(\log_2 \frac{\lambda(k)}{N_h\, h_{base}}\right)    (96)
where floor rounds down to the nearest integer. Quantization is used for practical reasons, because the controller cannot change the sampling interval continuously. The procedure for the change is described in the next subsection. According to (75), a suitable jitter margin for the ACS scheme can be selected directly by specifying Nh. The jitter margin in terms of consecutive packet drops is thus the same regardless of the control speed. The actual jitter margin according to (22), with IMC-PID control and the parameters of the simulation case described in Section 5.2.4 (T = 10), is solved earlier in this thesis and plotted as a function of the control speed in Figure 15. Without quantization the obtained jitter margin is as specified at Nh = 8, but quantization alters the jitter margin.
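A minimal sketch of one ACS update step, combining (93)-(96), is given below; the function names and the way the other loops' QoS values are passed in are assumptions for illustration.

    import math

    def quantized_h(lam, h_base, N_h):
        """Sampling interval quantized to a power of two of h_base, eq. (96)."""
        p = math.floor(math.log2(lam / (N_h * h_base)))
        return h_base * 2 ** p

    def acs_update(lam, r_local, r_others, r_d, T, c, h_base, N_h):
        """One adaptive control speed update step, eqs. (93)-(95)."""
        worst = max([r_local] + list(r_others))
        # global slow-down if any loop is above the desired QoS, eq. (94)
        dr = (worst - r_d) if worst > r_d else (r_local - r_d)
        m = lam / T if dr <= 0 else T / lam           # update speed, eq. (95)
        lam = lam + c * quantized_h(lam, h_base, N_h) * dr * m   # eq. (93)
        return lam, quantized_h(lam, h_base, N_h)

    # e.g. a loop with T = 10 s that currently sees too many drops slows down
    print(acs_update(lam=5.0, r_local=0.08, r_others=[0.02], r_d=0.04,
                     T=10.0, c=2.0, h_base=0.01, N_h=8))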
5.2.2. Changing the Sampling Interval

The algorithm for changing the sampling interval starts from a change in the calculated quantized h(k) (96) at the controller. The process model is first re-discretized and then the new IMC controller is calculated. Changing the sampling interval in the middle of a run requires some calculation to make a seamless transition [3]. The decision to use quantized sampling intervals in the ACS algorithm simplifies the transition calculations and avoids changing the sampling interval continuously.

Changing to a longer sampling interval is easy, as the interval is in this case doubled: the new samples are calculated as averages over pairs of the previous control input/output values and the new controller is switched on immediately.
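A minimal sketch of this doubling transition (illustrative function name; histories assumed newest first) is:

    def to_slower(u_hist, y_hist):
        """Controller history initialization when doubling the sampling interval.

        New samples are averages over pairs of the previous fast-rate control
        input/output values.
        """
        pair_avg = lambda v: [(a + b) / 2.0 for a, b in zip(v[0::2], v[1::2])]
        return pair_avg(u_hist), pair_avg(y_hist)

    print(to_slower([1.0, 0.8, 0.6, 0.4], [0.9, 0.7, 0.5, 0.3]))
    # -> ([0.9, 0.5], [0.8, 0.4])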
When halving the sampling interval, the controller needs to be initialized with in-between samples. There exist several applicable methods to change the sampling online without bumps; one can use interpolation with splines or optimization to find the in-between values [3]. Here, an algorithm that matches the outputs of the old and new controllers is proposed [P9].

The change to a shorter sampling interval is sketched in Figure 52. The sensor is first informed of the new sampling interval, and it starts transmitting with it. The old controller is still run during the initialization phase, using every other measurement of the new sampling interval. Once enough samples with the faster sampling rate have been received, the initialization is done according to the following algorithm and the new controller is applied.

The switch is done at the time-instant k = ks, with indexing according to the new, faster sampling rate. As the slow-sampling controller has been used, every other control value is matched such that the same output response is achieved, i.e. u(k) of the old controller must equal u(2k) of the new one. The in-between u-values (indicated in Figure 53) are solved using the controller equation

    D(z)\, u(k_s - m) = N(z)\, y(k_s - m)    (97)

where Gc(z) = N(z)/D(z). The values u(ks − even) are fixed by the old controller and u(ks − uneven) are unknown (even = 0, 2, 4, … and uneven = 1, 3, 5, …). The uneven values u(ks − uneven) are found by solving x from the linear equation Ax = b, using the fixed even values (Figure 53).
Figure 52. Proposed method to switch to a shorter sampling interval, with deg(Gc) = 5. Control signal and instants for process measurements shown.
    D(1)u(k_s)   + D(2)u(k_s-1) + D(3)u(k_s-2) + ... = N(1)y(k_s)   + N(2)y(k_s-1) + ...
    D(1)u(k_s-1) + D(2)u(k_s-2) + D(3)u(k_s-3) + ... = N(1)y(k_s-1) + N(2)y(k_s-2) + ...
    D(1)u(k_s-2) + D(2)u(k_s-3) + D(3)u(k_s-4) + ... = N(1)y(k_s-2) + N(2)y(k_s-3) + ...
    D(1)u(k_s-3) + D(2)u(k_s-4) + D(3)u(k_s-5) + ... = N(1)y(k_s-3) + N(2)y(k_s-4) + ...
Figure 53. Unknown u-values to be solved indicated by a box when switching from slower to faster sampling.
The unknown vector and the system matrices are

    x = \left[ u(k_s-1)\ \ u(k_s-3)\ \ \cdots\ \ u(k_s-2M+1) \right]

    A(\text{rows } 2m \text{ and } 2m+1,\ \text{columns } m \text{ to } m + \mathrm{ceil}(\deg(D)/2)) = \begin{bmatrix} D(2) & D(4) & \cdots & D(\text{even}) \\ D(1) & D(3) & \cdots & D(\text{odd}) \end{bmatrix}

where deg(D) > 2 is the order of the polynomial D(z), D(n) is the term of the nth power of D, ceil rounds up to the nearest integer, and

    b(\text{rows } 2m \text{ and } 2m+1) = \begin{bmatrix} N(z)\,y(k_s-m) - D(1+\text{even})\,u(k_s-2m) \\ N(z)\,y(k_s-m-1) - D(1+\text{uneven})\,u(k_s-2m-1) \end{bmatrix}

where one-based indexing is used for the elements of D, m = [0 … M/2], and M = deg(D) − 2. If deg(D) ≤ 2, no solving needs to be done; the new controller can continue immediately using every other previously received value.

An example of changing the sampling interval is given in Figure 54, for both increasing and decreasing the sampling interval. With the initialization calculation presented above, the control continues smoothly after the switch.
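The sketch below implements the same output-matching idea with simplified, generic indexing rather than the exact A, x, b construction above: the even-lag u values fixed by the old controller are substituted into the controller equation, and the uneven-lag values are solved by least squares.

    import numpy as np

    def solve_inbetween_u(D, N, u_known, y):
        """Solve the unknown uneven-lag control values after halving h.

        Enforces D(z)u(k_s - m) = N(z)y(k_s - m), eq. (97), for
        m = 0..deg(D)-2, with even-lag u values fixed by the old controller.
        u_known maps even lags j to u(k_s - j); y[j] is y(k_s - j).
        """
        n = len(D)
        M = n - 2
        odd_lags = [j for j in range(1, M + n) if j % 2 == 1]
        A = np.zeros((M + 1, len(odd_lags)))
        b = np.zeros(M + 1)
        for m in range(M + 1):
            b[m] = sum(N[i] * y[m + i] for i in range(len(N)))
            for i in range(n):
                lag = m + i
                if lag % 2 == 0:
                    b[m] -= D[i] * u_known[lag]       # known even-lag value
                else:
                    A[m, odd_lags.index(lag)] = D[i]  # unknown uneven-lag value
        x, *_ = np.linalg.lstsq(A, b, rcond=None)     # least squares if non-square
        return dict(zip(odd_lags, x))                 # lag -> u(k_s - lag)

    # toy third-order controller and histories (newest first)
    D = [1.0, -0.5, 0.2, 0.1]
    N = [0.4, 0.3]
    u_known = {0: 1.0, 2: 0.8, 4: 0.6}
    y = [0.9, 0.85, 0.8, 0.7]
    print(solve_inbetween_u(D, N, u_known, y))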
5.2.3. Analysis of the Adaptive Control Speed Algorithm

In this section the ACS algorithm is shown to be of additive increase, multiplicative decrease type (AIMD), which is a typical approach for bandwidth control of network traffic. AIMD is for instance used in TCP.

The evolution of λ is analyzed by combining (93) with (95) and using (96), neglecting the rounding by using h(k) = λ(k)/Nh instead. When Δr > 0, (93) becomes

    \lambda(k+1) = \lambda(k) + \frac{cT}{N_h}\, \Delta r(k)    (98)

and when Δr ≤ 0

    \lambda(k+1) = \lambda(k)\left(1 + \frac{c}{T N_h}\, \Delta r(k)\right).    (99)
Figure 54. Switching of the sampling interval. Top: slow to fast (h = 1 s to 0.5 s). Bottom: fast to slow (h = 0.5 s to 1 s), at time t = 8 s. The controller switches to fast sampling at t = 10 s (top) because of the required initialization. Control signal u and process response y plotted.
Equations (98) and (99) show that λ is increased additively when it is too small and decreased multiplicatively when it is too large. Thus the ACS is an AIMD type algorithm. The additive and multiplicative constants are proportional to the error from the desired QoS, Δr(k). The main difference between this algorithm and any TCP algorithm is that here the control speed is adjusted, so that the actual amount of traffic is adjusted on the application layer, instead of the transmission rate being adjusted on the transport layer.

Now the stability of the ACS scheme is analyzed. The general stability of an AIMD type rate control algorithm is difficult to prove. An early analysis is given by [30]. One can consider several cases, such as one [14] or several bottleneck links [105], [75]. Below is a very simplistic proof for the case with one bottleneck, instantaneous packet drop feedback, and no queue overflows.
Consider a system with several control loops, all governed by the ACS scheme. If the desired QoS is reached, then Δr = 0 and the control speed update (98), (99) remains constant, λ(k+1) = λ(k). As there exists an equilibrium, we next assess what happens when the ACS is not at steady-state. Assume that the network QoS is a function of the traffic over a bottleneck link,

    r(k) = f\left(\sum_i \frac{1}{h_i(k)}\right) = f\left(\sum_i \frac{N_h}{\lambda_i(k)}\right)    (100)

where the sum is the packet frequency over that link, summed over all the control loops with sampling intervals hi. As traffic increases, f gives the QoS cost r as a positive, increasing function of the traffic (and ultimately of the control speed). Hence, the network QoS cost increases in some (non-linear) manner if the traffic over the network increases, which is a typical behavior of networks with a CSMA type MAC.

If Δr > 0, (98) implies λ(k+1) > λ(k), and since f is positive and increasing, f(k+1) < f(k) and Δr(k+1) < Δr(k). Similar reasoning when Δr ≤ 0 gives in (99) λ(k+1) < λ(k) and f(k+1) > f(k), which leads to Δr(k+1) > Δr(k). The reasoning is the same for all the control loops, as they all measure the same r; thus the magnitude of Δr(k) is always decreasing until Δr approaches zero. In practice this may never happen, as packet drops are randomly distributed and all loops do not observe exactly the same QoS. The following simulations indicate that the ACS is still well behaved. As with any similar learning algorithm, the choice of c determines the rate of convergence of the algorithm. Selecting a small value makes convergence slow, but a too large value may cause oscillation around the optimum.
5.2.4. Simulation Scenario

The simulation scenario consists of six control loops using ACS. Measurements of the controlled processes are transmitted wirelessly over an IEEE 802.15.4 network. The network topology is shown in Figure 55, where all control loops communicate over one bottleneck in the center of the network. The distances are such that the radio signal reaches only the nearest neighbors; thus multihop communication is used. AODV [126] is used as the routing protocol. A simulation of 6000 seconds is done, where loops 5 and 6 are initially idle and start operation at times t = 2000 s and t = 4000 s, to show how the ACS algorithm reacts when traffic is suddenly increased.

The process models in the loops are continuous-time, first order transfer functions with unit gain and time-constants as indicated in Figure 55. All the processes have a delay of τ = 0.5 seconds. A PID controller with the IMC-PID tuning without a pre-filter, described in Section 2.7.2, is used.
Figure 55. Network topology in the simulated scenario, consisting of six wireless control loops. Possible communication routes are indicated.
The selected parameters for the ACS algorithm are the following: the packet drop low-pass filter coefficient is β = 0.98 and the update speed is c = 2. The desired packet drop is rd = 4 %. The base sampling interval is set sufficiently low at hbase = 0.01 s, and Nh = 8.

Changing the sampling interval in practice commences by the controller sending a packet to the sensor, instructing it to use the new interval. The measurement packets from the sensor contain the used sampling interval, such that the controller knows when the sensor has successfully switched to the new sampling interval. If no change is made, the controller repeats the request.

Another practical issue is the individual QoS needed by (94). The loops must obtain this information from the other loops. Sharing this information is done with the so called send-on-delta approach to minimize the used bandwidth. The send-on-delta mechanism means that a loop notifies the other loops by sending a packet with its current local QoS ri, if it is above rd and has changed by more than a certain threshold since the previous update. Additionally, the nodes send a packet when the QoS returns to the desired region (a sketch of this rule is given below).

The results of one of several runs are shown in the following figures. Figure 56 shows the average packet drop of the individual loops, where the bold line is the total QoS (92), which is mostly kept below the desired level of 4 %. The control speeds and corresponding sampling intervals for all the loops are shown in Figure 57. Initially all the loops decrease their sampling intervals, until packets start to drop. When loops 5 and 6 start, congestion occurs and all the loops slow down to accommodate the increased congestion introduced by the additional loops. Notice how the new loops find an appropriate control speed, even though they initially start with a conservative control speed. The ACS thus compensates for the changing traffic conditions.
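A minimal sketch of the send-on-delta reporting rule mentioned above (illustrative function name and state handling) is:

    def send_on_delta_qos(r_i, r_d, last_sent, delta, was_above):
        """Decide whether a loop should broadcast its local QoS.

        Report only if the QoS r_i is above the desired level r_d and has
        changed by more than `delta` since the last report, or when the QoS
        returns to the desired region. Returns (send, last_sent, was_above).
        """
        if r_i > r_d:
            if abs(r_i - last_sent) > delta:
                return True, r_i, True     # send an updated bad-QoS report
            return False, last_sent, True
        if was_above:
            return True, r_i, False        # send "back in the desired region"
        return False, last_sent, False

    # e.g. QoS rises above the 4 % target and then recovers
    last_sent, was_above = 0.0, False
    for r in [0.03, 0.06, 0.065, 0.09, 0.03]:
        send, last_sent, was_above = send_on_delta_qos(r, 0.04, last_sent,
                                                       0.02, was_above)
        print(r, "->", "send" if send else "silent")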
Figure 56. Observed average packet drop for all individual loops and the total QoS rtot (92) (black line). Desired QoS drawn with a dotted line.
Figure 57. Top: control speed of all the control loops as a function of time. Bottom: corresponding sampling intervals.
From the experience of the simulations, which include all the network related issues of the media access and routing protocols, one can conclude that the ACS algorithm works as intended. The assumption that the packet drop depends directly on the congestion of the network turns out, in practice, not to hold completely. Packet drop mostly depends on the precise timing in the network; examples are collisions when two nodes transmit simultaneously, or queue overflows. These are more probable when the network is congested, but are stochastic in nature. ACS could instead use network congestion information to adjust the control speed. Congestion feedback information, either implicit through random early drop [51] or explicit by messaging from the intermediate nodes [136], could be used to accomplish the rate control.
5.2.5. Summary

The adaptive control speed algorithm for NCSs changes the tuning λ of an IMC controller depending on the network QoS. The measurement sampling rate is changed as a function of λ, which adjusts the traffic of the network such that it is not congested. If the network is congested, the control speeds and sampling rates of all the control loops are reduced to compensate.

The algorithm is unique in the sense that it adjusts the controller generated traffic in a NCS setting, depending on the offered network QoS. It is a control oriented approach to adapting to a network layer problem. The sampling interval adaptation can also be applied to sensor network type monitoring applications, where the importance of the measurement is specified by the parameter T.

The proper change of the sampling interval is considered here, whereas in most works found in the literature the old controller continues to be used with a new sampling interval and the whole issue is ignored. The adaptive IMC based controller handles online changes of the sampling rate without bumps, by an initialization procedure.

The presented ACS algorithm is demonstrated with PiccSIM, where six control loops using ACS are simulated. The control speeds are adjusted online as more loops are added to the network, such that the desired QoS is maintained.
5.3. Step Adaptive Controller for Networked MIMO Control Systems

In this section the multiple-input multiple-output (MIMO) WNCS case is considered. A decentralized wireless 2x2 MIMO control system is depicted in Figure 58: a MIMO process with wireless sensors measuring all the outputs and separate controllers for the inputs, i.e. diagonal MIMO control, is assumed.
Figure 58. Diagram of a 2x2 MIMO process in a NCS.
Full MIMO control results in high network traffic, because the information from every sensor is needed for every control input. Due to the communication requirements of full MIMO control, diagonal MIMO control, where separate SISO loops control the MIMO process as shown in Figure 58, is more suitable for WNCSs and is thus considered here.

The lightweight requirement, due to the low communication capabilities of wireless nodes, demands restricting the algorithms to simple types of controllers, such as PID or IMC controllers. Although the achievable performance with several SISO PID or IMC controllers controlling each input-output pair may not be as good as with a full MIMO controller, the decomposition is justified in a WNCS because of the low and local communication needs compared to the full MIMO case. When carefully tuned, the structural simplicity of the individual controllers may outweigh the performance advantage of the more complex MIMO controller in a WNCS setting. Thus, the need for good diagonal MIMO PID controller tuning is obvious, and there are plenty of tuning methods to choose from [145]. Here, a controller tuning switching method is proposed, such that good control is achieved depending on the input in which a reference step change is made [P10].

In the multivariable control case, the objectives of the controllers are to produce a feasible step response in one loop and efficient cross-interaction elimination in all the other loops. The idea of the step adaptive controller (SAC) is similar to cascade control, where the disturbance would be suppressed by creating a plain speed difference between the loops. In other words, the controller of the loop which performs a step would correspond to the primary controller of the cascade control, with a lower loop speed (equivalent to a larger IMC λ value). At the same time, the other loop would be tuned faster (smaller λ), and thus be more efficient at compensating for the cross-interaction disturbance. [P10]
The step adaptive controller thus switches the tuning depending on whether the loop has a change in its own reference or not. If a step response is expected, the tuning is changed in order to ensure a good response. Conversely, if a set point change is made in another, interacting loop, a tuning more suitable for cross-interaction rejection is selected. If there are concurrent reference changes, the latter strategy is selected.

The design of the step adaptive controller is naturally done using the IMC framework (Section 2.7), where the controller can be tuned with only one tuning parameter related to the speed of the step response. The tuning can be applied to a conventional IMC controller or to an IMC-PID controller, both of which are considered here, with some design alternatives summarized in Figure 59. The SAC framework, which changes the controller tuning depending on the situation, is not restricted to IMC control with its notion of control speed, but can be applied through optimization to any parameterized controller. An example used here is optimizing the parameters of a PID controller.

The controller tuning is chosen by optimization; thus, the envisioned speed difference may not necessarily materialize. By changing the cost criterion, the operator can choose an acceptable step response. The selection of the cost criterion is investigated in the next section.

Although the step adaptive controller is applied here to a 2x2 process, it can be extended to the n x n MIMO case as well. In the n x n case, the proposed procedure would yield n-1 tuning parameter values for every controller, optimized for eliminating the cross-interactions that originate from the n-1 other loops. This large number of different tuning values (n x (n-1)) needed for cross-interaction elimination could be reduced by first analyzing the interactions between the loops and then optimizing only for the loop that causes the largest interaction. The chosen tuning would then also be suitable for the other, less significant cross-interactions from the other loops.
[Figure 59 flowchart: the tuning alternatives are to design a discrete-time IMC controller, or to design an IMC-PID controller and discretize it to a discrete-time PID; then optimize λ, γ, or the PID gains Kp, Ki, and Kd, in each case n times for separate steps in all the loops.]
Figure 59. Step adaptive controller tuning alternatives.
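The switching rule itself admits a very compact sketch. The boolean per-loop representation of reference changes below is an assumption made for illustration; the fallback for concurrent steps follows the rule stated above.

def select_tuning_modes(ref_changed):
    # ref_changed[i] is True if loop i just received a set-point change.
    # A loop gets the slow "step" tuning only when it is the single loop
    # that stepped; with concurrent reference changes every loop uses the
    # fast "interaction" tuning, as stated in the text.
    n_changed = sum(ref_changed)
    return ["step" if changed and n_changed == 1 else "interaction"
            for changed in ref_changed]

print(select_tuning_modes([True, False]))  # ['step', 'interaction']
print(select_tuning_modes([True, True]))   # ['interaction', 'interaction']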
5.3.1. Controller Tuning by Optimization for MIMO Systems

To tune the step adaptive controller, the simulation-based tuning procedure of [129] is applied. In the MIMO case, unit reference changes are made, for example, sequentially to each input of the system. Hence, each output response is composed of two different situations in which the control performance is assessed: the step response and the cross-interaction. These two cases are different in nature and may set competing requirements for the control actions. Therefore, the cost criterion for the tuning optimization is considered in this subsection. A suitable cost criterion is selected to fit the desired control objectives in both of the above-mentioned situations for the SAC.

In order to evaluate the control performance of a MIMO system, a new cost criterion is proposed. The total cost, which is minimized for optimal control tuning, is chosen as a weighted sum of two individual costs: the costs during the step response and during the cross-interaction response. The ITSE criterion (32) yields good step responses because of the absolute time included in the cost calculation, which discounts the initial step transient and emphasizes the settling to a steady state. The ISE criterion (31) is more suitable for evaluating the cost under load disturbances, which can occur at any time. Therefore, the cost criterion is switched from ITSE to ISE at tload, when the character of the response changes from step response to cross-interaction, that is, at the time when another loop has a step response, as shown in Figure 60. A weighted sum of the two cost functions is taken, similarly as in [53]. The weight factor α (0 ≤ α ≤ 1) adjusts the relative weighting of the two costs.
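As a sketch of how the switched criterion could be evaluated on a simulated response, the following Python fragment applies rectangular integration to the error record; the signal representation, the integration scheme, and the synthetic test signal are illustrative assumptions, not the procedure of [129].

import numpy as np

def sac_cost(t, e, t_load, alpha=0.5):
    # ITSE over the step-response part of the record (t < t_load) and
    # ISE after t_load, when the response turns into a cross-interaction
    # disturbance; alpha weights the two contributions.
    dt = t[1] - t[0]
    step = t < t_load
    itse = np.sum(t[step] * e[step] ** 2) * dt
    ise = np.sum(e[~step] ** 2) * dt
    return alpha * itse + (1.0 - alpha) * ise

# Synthetic error: a decaying step transient plus a bump at t = 5 s,
# standing in for the cross-interaction caused by the other loop's step.
t = np.linspace(0.0, 10.0, 1001)
e = np.exp(-t) + 0.2 * np.exp(-(t - 5.0) ** 2)
print(sac_cost(t, e, t_load=4.0, alpha=0.6))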