3 Mobile Robotic Systems

3.1 Introduction

Mobile robotics is an active research area in which researchers from all over the world develop new technologies to improve the intelligence of mobile robots and widen their areas of application. Today robots navigate autonomously in office environments as well as outdoors. Besides the mechanical and electronic barriers in building mobile platforms, perceiving the environment and deciding how to act in a given situation are crucial problems.

Mobile robotic systems are and will be used for new application areas. The range of potential applications for mobile robots is enormous. It includes agricultural robotics applications, routine material transport in factories, warehouses, office buildings and hospitals, indoor and outdoor security patrols, inventory verification, hazardous material handling, hazardous site cleanup, underwater applications, and numerous military applications. Global competition and the tendency to reduce production cost and increase efficiency create new applications for robots that stationary robots cannot perform. These new applications require the robots to move and perform certain activities at the same time. The availability and low cost of faster processors, better programming, and the use of new hardware allow robot designers to build more accurate, faster, and even safer robots. Currently, mobile robots are expanding outside the confines of buildings and into rugged terrain, as well as familiar environments like schools and city streets (Wong 2005).

3.2 Autonomous Mobile Robotic Systems

Vehicles that can perform desired tasks in unstructured environments without continuous human guidance are called autonomous mobile robots. Many kinds of robots are autonomous to some degree, including teleoperated robots. Different robots can be autonomous in different ways, but generally a high degree of autonomy is particularly desirable in dynamically changing environments and in space or cave exploration, where communication delays and interruptions are unavoidable.

This concept of autonomy is not only related to mobile robots; some modern factory robot manipulators are autonomous within the strict confines of their direct environment. Such factory robots are not subject to continuous human guidance and need not necessarily work under any human guidance at all. One important area of robotics research is to enable the robot to cope with its environment, whether this is on land, underwater, in the air, underground or in space (Toibero 2007). An ideally fully autonomous robot in the real world should have the ability to:
- gain information about the environment;
- work for months or years without human intervention;
- travel from point A to point B without human navigation assistance;
- avoid situations that are harmful to people, property or itself;
- repair itself without outside assistance.

One of the ultimate goals in robotics is to create autonomous robots. Such robots must accept high-level descriptions of tasks and execute them without further human intervention. The input descriptions specify what the user wants done rather than how to do it (Latombe 1991). It will, however, take some time before something close to the goals of Latombe appears.
The early robots, primarily used for manufacturing tasks such as welding, painting and so-called pick-and-place operations, worked in environments where very few unexpected events occurred and where exact repeatability of actions was the main measure of excellence.

Figure 3.1: Autonomous Mobile Robot Pioneer 3AT

The biggest obstacle in the design of robots for other areas of application is uncertainty. Starting from the premise that coping with uncertainty is the most crucial problem a mobile robot must face, it can be concluded that the robot must have the following basic capabilities:

Sensory Interpretation. The robot must be able to determine its relationship to the environment by sensing. A wide variety of sensing technologies are available: contact, odometry, ultrasonic, infrared and laser range sensing, monocular cameras and stereo vision have all been explored. The difficulty lies in interpreting these data, i.e., in deciding what the sensor signals tell about the external world. The usual way to attack this problem is sensor fusion, i.e., combining the outputs of multiple feature detectors, possibly operating on a variety of sensors, or simply multiple observations of the same object. Much of this work has focused on applications of Kalman filtering, which essentially provides a mechanism for weighting the various pieces of data based on estimates of their reliability: e.g., sonar presents some difficulties due to reflections on the objects, and laser rangefinders have difficulties when sensing dark objects due to absorption, but both can be used together in order to allow object detection in critical situations.

Reasoning. The robot must be able to decide what actions are required to achieve its goals in a given environment. This may involve decisions ranging from the selection of the paths to take, to which sensors and controller to use. This is usually done at a higher level, namely the supervisor, which takes into consideration all the information available to the control system and depends strongly on the employed control system architecture. As expected, in areas where humans move about, the positions of things are subject to change all the time. Furthermore, the variety of obstacles and objects the robot will encounter is very large. In such circumstances, both sensing and control become much more complex. Here appears the importance of the control architecture in order to increase the autonomy of the robotic platform. This item will be discussed in more detail in the next sections.

3.2.1 Types of Mobile Robots

Many different types of mobile robotic systems have been developed, depending on the kind of application, the velocity, and the type of environment, whether it is water, space, or terrain with fixed or moving obstacles. Four major categories have been identified (Dudek & Jenkin 2000):

Terrestrial or Ground-contact Robotic Systems. There are three main types of ground-contact robots: wheeled robots, tracked vehicles, and limbed (legged) vehicles. Wheeled robots exploit friction or ground contact to enable the robot to move. Different kinds of wheeled robots exist: the differential drive robot, synchronous drive robot, steered-wheel robots and Ackerman steering (car drive) robots, the tricycle, bogey and bicycle drive robots, and robots with complex, compound or omnidirectional wheels.

Figure 3.2: Pioneer Robots (ActivMedia Inc.)
Tracked vehicles are robust to almost any terrain. Their construction is similar to that of the differential drive robot, but the two differential wheels are extended into treads which provide a large contact area and enable the robot to navigate through a wide range of terrain.

Figure 3.3: Tracked Robots – COMET Group (Oklahoma State University)

Limbed vehicles are suitable in rough terrains such as those found in forests, near natural or man-made disasters, or in planetary exploration, where ground contact support is not available for the entire path of motion. Limbed vehicles are characterized by the design and the number of legs: the minimum number of legs needed for a robot to move is one, at least three legs are needed to support a robot, four legs are needed for a statically stable robot, and six-, eight- and twelve-legged robots exist. An example of a limbed robot is ASIMO, a humanoid two-legged walking robot developed by Honda, which features the ability to pursue key tasks in a real-life environment such as an office (Honda 2008).

Figure 3.4: ASIMO Robot (Honda)

Aquatic Robotic Systems. Aquatic vehicles support propulsion by utilizing the surrounding water. There are two common structures: one is the torpedo-like structure (Feruson & Pope 1995; Kloske et al. 1993), in which a single propeller provides forward and reverse thrust, the navigation direction is controlled by the control surfaces, and the buoyancy of the vessel controls the depth. The disadvantage of this type is poor maneuverability.

Figure 3.5: Madeleine Underwater Robot (NSF's Collaborative Research at Undergraduate Institutions)

Flying Robotic Systems. First, fixed-wing autonomous vehicles utilize control systems very similar to the ones found in commercial autopilots. A ground station can provide remote commands if needed, and with the help of the Global Positioning System (GPS) the location of the vehicle can be determined. Automated helicopters use onboard computation and sensing as well as ground control; their control is very difficult compared to that of fixed-wing autonomous vehicles.

Figure 3.6: CL327 Guardian Helicopter (AHS)

Buoyant vehicles (aerobots, aerovehicles, or blimps) can float and are characterized by a high energy-efficiency ratio, long-range travel and duty cycle, and vertical mobility, and they usually have no disastrous results in case of failure. Unpowered autonomous flying vehicles reach their desired destination by utilizing gravity, GPS, and other sensors.

Figure 3.7: Bell Eagle Eye – Military Unmanned Aerial Vehicle (UAV)

Space Robotic Systems. These are needed for applications related to space stations, such as construction, repair, and maintenance. Free-flying systems have been proposed in which the spacecraft is equipped with thrusters and one or more manipulators; the thrusters are utilized to modify the robot trajectory (Dudek & Jenkin 2000).

Figure 3.8: Mars Exploration Rover Mission – Spirit Robot (NASA 2004)

3.2.2 Kinematic Modeling

Kinematic models of mobile robots are used in the design of controllers when the vehicle performs tasks or missions at low speed and with light load. Mobile robots have quite simple mathematical models to describe their instantaneous motion capabilities. However, this holds only for single mobile robots, because the modeling becomes complex as soon as one begins to add trailers to mobile robots.
Airport luggage carts are a fine example of such mobile robot trains. Real-world implementations of car-like or differentially-driven mobile robots have three or four wheels, because the robot needs at least three noncollinear support points in order not to fall over. However, the kinematics of the moving robots are most often described by simpler equivalent robot models.

Holonomic Constraint. A constraint that restricts the system motion to a smooth hypersurface in the configuration space is called a holonomic constraint.

Pfaffian Constraint. Given a system with configuration variables q, a constraint of the form

A(q)\,\dot{q} = 0     (3.1)

where A(q) \in \mathbb{R}^{m \times n} and q \in \mathbb{R}^{n}, is called a Pfaffian constraint.

Unicycle-like Mobile Robot. A unicycle mobile robot is a driving robot that can rotate freely around its axis. The term unicycle is often used in robotics and control theory to mean a generalized cart or car moving in a two-dimensional world; these are also often called "unicycle-like" or "unicycle-type" vehicles. These theoretical vehicles are typically shown as having two parallel driven wheels, one mounted on each side of their centre, and (presumably) some sort of offset castor to maintain balance, although in general they could be any vehicle capable of simultaneous arbitrary rotation and translation. An alternative realization uses a single driven wheel with steering and a pair of idler wheels to give balance and allow a steering torque to be applied. A physically realizable unicycle, in this sense, is a nonholonomic system: a system in which a return to the original internal (wheel) configuration does not guarantee a return to the original system (unicycle) position. In other words, the system outcome is path-dependent.

Nonholonomic Constraint. The common characteristic of mobile robots is that they cannot autonomously produce a velocity which is transversal to the axle of their wheels. A differentially-driven robot has one such constraint (the caster wheels are mounted on a swivel and hence give no constraint, except for friction); bicycles and cars have two constraints: one on the front wheel axle and one on the rear wheel axle. These constraints are nonholonomic constraints on the velocity of the robots (Latombe 1991), i.e., they cannot be integrated to give a constraint on the robot's Cartesian pose (the word "holonomic" is built from the Greek words holos, "integral", and nomos, "law"). In other words, the vehicle cannot move transversally instantaneously, but it can reach any position and orientation by moving backward and forward while turning appropriately; parking a car is a typical example of this maneuvering. The nonholonomic constraints reduce the instantaneous velocity degrees of freedom of the mobile robot. A Pfaffian constraint that is not equivalent to a holonomic constraint is called a nonholonomic constraint.

Nonholonomic Kinematic Model. The kinematics of a nonholonomic mobile robot can be modeled by equation (3.2):

\dot{x} = u\,\cos\theta
\dot{y} = u\,\sin\theta     (3.2)
\dot{\theta} = \omega

where u and \omega are the control inputs, the linear and the angular velocity respectively. The robot state variables are x, y and \theta, where (x, y) are the coordinates of the middle point between the driving wheels and \theta denotes the heading of the vehicle relative to the x-axis of the world coordinate system. The vector [x\;\; y\;\; \theta]^T defines the posture of the vehicle. A rear wheel turns freely and balances the rear end of the robot above the ground. A non-slip condition is assumed on the wheels, so the robot cannot move sideways; this is the nonholonomic constraint of the unicycle robot. Figure 3.9 shows a diagram of a unicycle-like mobile robot.

Figure 3.9: Unicycle-like nonholonomic mobile robot diagram
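To make (3.2) concrete, the following short sketch integrates the unicycle kinematics with a simple Euler scheme; the step size and the constant inputs are arbitrary values chosen for illustration, not parameters taken from this chapter.

```python
import numpy as np

def unicycle_step(state, u, omega, dt):
    """One Euler-integration step of the unicycle kinematic model (3.2).

    state = (x, y, theta): position of the mid-point between the driving
    wheels and heading with respect to the world x-axis.
    u, omega: linear and angular velocity inputs.
    """
    x, y, theta = state
    x += u * np.cos(theta) * dt
    y += u * np.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Example: constant inputs trace a circular arc of radius u/omega.
state = (0.0, 0.0, 0.0)
for _ in range(200):
    state = unicycle_step(state, u=0.3, omega=0.5, dt=0.05)
print(state)
```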
Holonomic Kinematic Model. The kinematics of a holonomic mobile robot can be modeled by (3.3):

\dot{x} = u\,\cos\theta - a\,\omega\,\sin\theta
\dot{y} = u\,\sin\theta + a\,\omega\,\cos\theta     (3.3)

If the position of the unicycle-like mobile robot is defined by a point located at a distance a in front of the center of the wheel axis (see Fig. 3.10), then the holonomic model of the mobile robot is obtained.

Figure 3.10: Unicycle-like holonomic mobile robot diagram

The holonomic model does not have velocity restrictions on the plane; that is, the point (x, y) is able to move in any direction.

3.2.3 Dynamic Modeling

Decoupled dynamic model. The decoupled dynamics of the unicycle-like mobile robot is given by two differential equations, one for the translational movement and one for the rotational movement:

T_u\,\dot{u} + u = K_{gu}\,v^{+}     (3.4)
T_\omega\,\dot{\omega} + \omega = K_{g\omega}\,v^{-}     (3.5)

where v^{+} = v_r + v_l and v^{-} = v_r - v_l are the common and differential voltages formed from the voltages v_r and v_l applied to the right and left motors of the mobile robot, respectively; T_u and T_\omega are time constants; and K_{gu} and K_{g\omega} are the corresponding gains of each differential equation. This model assumes that the center of mass of the mobile robot is located at the center of the wheel baseline, and it is valid only as long as this condition is fulfilled. If this condition is not fulfilled, it is necessary to use a coupled dynamic model.
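As a rough illustration of how the decoupled model (3.4)-(3.5) behaves, the sketch below simulates the two first-order systems driven by the common and differential voltages; the time constants and gains are placeholder values, not identified parameters of any particular robot.

```python
import numpy as np

# Placeholder parameters (illustration only; a real robot needs identified values).
T_u, T_w = 0.3, 0.15      # time constants [s]
K_gu, K_gw = 0.05, 0.10   # gains

def decoupled_step(u, w, v_r, v_l, dt):
    """One Euler step of the decoupled dynamics (3.4)-(3.5).

    v_plus = v_r + v_l drives the translational dynamics,
    v_minus = v_r - v_l drives the rotational dynamics.
    """
    v_plus, v_minus = v_r + v_l, v_r - v_l
    du = (-u + K_gu * v_plus) / T_u
    dw = (-w + K_gw * v_minus) / T_w
    return u + du * dt, w + dw * dt

u = w = 0.0
for _ in range(400):                 # 2 s of simulation with dt = 5 ms
    u, w = decoupled_step(u, w, v_r=6.0, v_l=4.0, dt=0.005)
print(f"steady state: u = {u:.3f} m/s, omega = {w:.3f} rad/s")
```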
Coupled dynamic model. In the coupled dynamic model the translational movement is affected by the rotational movement and, vice versa, the rotational movement is affected by the translational movement. The dynamic model introduced by De la Cruz (2006) is illustrated in Fig. 3.11, where h = (x, y) is the point that is required to track a trajectory; G is the center of mass; B is the center of the wheel baseline; u and \bar{u} are the longitudinal and lateral velocities of the center of mass; \omega and \theta are the angular velocity and heading of the robot; d, b, a, e and c are distances; F_{rrx'} and F_{rry'} are the longitudinal and lateral tire forces of the right wheel; F_{rlx'} and F_{rly'} are the longitudinal and lateral tire forces of the left wheel; F_{cx'} and F_{cy'} are the longitudinal and lateral forces exerted on C by the castor; F_{ex'} and F_{ey'} are the longitudinal and lateral forces exerted on E by the tool; and \tau_e is the moment exerted by the tool. The force and moment equations for the robot are (Boyden & Velinsky 1994):

\sum F_{x'} = m(\dot{u} - \bar{u}\,\omega) = F_{rlx'} + F_{rrx'} + F_{ex'} + F_{cx'}
\sum F_{y'} = m(\dot{\bar{u}} + u\,\omega) = F_{rly'} + F_{rry'} + F_{ey'} + F_{cy'}     (3.6)

\sum M_z = I_z\,\dot{\omega} = \frac{d}{2}\,(F_{rrx'} - F_{rlx'}) - b\,(F_{rly'} + F_{rry'}) + (e - b)\,F_{ey'} - (c + b)\,F_{cy'} + \tau_e     (3.7)

where m is the robot mass and I_z is the robot moment of inertia about the vertical axis located at G. The kinematics of point h is:

\dot{x} = u\,\cos\theta - \bar{u}\,\sin\theta - (a - b)\,\omega\,\sin\theta     (3.8)
\dot{y} = u\,\sin\theta + \bar{u}\,\cos\theta + (a - b)\,\omega\,\cos\theta     (3.9)

According to Zhang et al. (1998), the velocities u, \omega and \bar{u}, including the slip speeds, are:

u = \frac{1}{2}\left[ r(\omega_r + \omega_l) + u_{rs} + u_{ls} \right]     (3.10)
\omega = \frac{1}{d}\left[ r(\omega_r - \omega_l) + u_{rs} - u_{ls} \right]     (3.11)
\bar{u} = \frac{b}{d}\left[ r(\omega_r - \omega_l) + u_{rs} - u_{ls} \right] + \bar{u}_s = b\,\omega + \bar{u}_s     (3.12)

where r is the right and left wheel radius; \omega_r and \omega_l are the angular velocities of the right and left wheels; u_{rs} and u_{ls} are the longitudinal slip speeds of the right and left wheels; and \bar{u}_s is the lateral slip speed of the wheels.

Figure 3.11: Coupled dynamic model of a mobile robot

The motor models, obtained by neglecting the voltage drop on the inductances, are:

\tau_r = \frac{k_a (v_r - k_b\,\omega_r)}{R_a}     (3.13)
\tau_l = \frac{k_a (v_l - k_b\,\omega_l)}{R_a}     (3.14)

where v_r and v_l are the input voltages applied to the right and left motors; k_b is the voltage constant multiplied by the gear ratio; R_a is the electric resistance constant; \tau_r and \tau_l are the right and left motor torques multiplied by the gear ratio; and k_a is the torque constant multiplied by the gear ratio. The dynamic equations of the motor-wheels are:

I_e\,\dot{\omega}_r + B_e\,\omega_r = \tau_r - F_{rrx'}\,R_t     (3.15)
I_e\,\dot{\omega}_l + B_e\,\omega_l = \tau_l - F_{rlx'}\,R_t     (3.16)

where I_e and B_e are the moment of inertia and the viscous friction coefficient of the combined motor rotor, gearbox and wheel, and R_t is the nominal radius of the tire.

In general, most market-available robots have low-level PID velocity controllers to track input reference velocities and do not allow the motor voltage to be driven directly. Therefore, it is useful to express the mobile robot model in a suitable way by considering the rotational and translational reference velocities as control signals. For this purpose, the velocity controllers are included in the model. To simplify the model, a PD velocity controller has been considered, described by the following equations:

v_u = k_{PT}\,(u_{ref} - u_{me}) - k_{DT}\,\dot{u}_{me}
v_\omega = k_{PR}\,(\omega_{ref} - \omega_{me}) - k_{DR}\,\dot{\omega}_{me}     (3.17)

where

u_{me} = \frac{r}{2}\,(\omega_r + \omega_l), \qquad \omega_{me} = \frac{r}{d}\,(\omega_r - \omega_l)

The derivatives \dot{u}_{ref} and \dot{\omega}_{ref} have been neglected in (3.17) to further simplify the model. From (3.10)-(3.17) the following dynamic model of the mobile robot is obtained:

\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \\ \dot{u} \\ \dot{\omega} \end{bmatrix} =
\begin{bmatrix}
u\cos\theta - a\,\omega\sin\theta \\
u\sin\theta + a\,\omega\cos\theta \\
\omega \\
\dfrac{\theta_3}{\theta_1}\,\omega^2 - \dfrac{\theta_4}{\theta_1}\,u \\
-\dfrac{\theta_5}{\theta_2}\,u\,\omega - \dfrac{\theta_6}{\theta_2}\,\omega
\end{bmatrix}
+
\begin{bmatrix}
0 & 0 \\ 0 & 0 \\ 0 & 0 \\ \dfrac{1}{\theta_1} & 0 \\ 0 & \dfrac{1}{\theta_2}
\end{bmatrix}
\begin{bmatrix} u_{ref} \\ \omega_{ref} \end{bmatrix}
+
\begin{bmatrix} \delta_x \\ \delta_y \\ 0 \\ \delta_u \\ \delta_\omega \end{bmatrix}     (3.18)

The parameters of the dynamic model are:

\theta_1 = \frac{\frac{R_a}{k_a}\,(m R_t r + 2 I_e) + 2 r\,k_{DT}}{2 r\,k_{PT}}
\theta_2 = \frac{\frac{R_a}{k_a}\left(I_e d^2 + 2 R_t r\,(I_z + m b^2)\right) + 2 r d\,k_{DR}}{2 r d\,k_{PR}}
\theta_3 = \frac{R_a\, m\, b\, R_t}{2 k_a\, k_{PT}}
\theta_4 = \frac{R_a}{k_a}\left(\frac{k_a k_b}{R_a} + B_e\right)\frac{1}{r\,k_{PT}} + 1
\theta_5 = \frac{R_a\, m\, b\, R_t}{k_a\, d\, k_{PR}}
\theta_6 = \frac{R_a}{k_a}\left(\frac{k_a k_b}{R_a} + B_e\right)\frac{d}{2 r\,k_{PR}} + 1

The first two elements of the uncertainty vector \delta = [\delta_x\;\; \delta_y\;\; 0\;\; \delta_u\;\; \delta_\omega]^T are

\delta_x = -\bar{u}_s\,\sin\theta, \qquad \delta_y = \bar{u}_s\,\cos\theta

while \delta_u and \delta_\omega collect the remaining terms that depend on the wheel slip speeds u_{rs}, u_{ls} and \bar{u}_s and their derivatives, on the forces F_{ex'}, F_{cx'}, F_{ey'}, F_{cy'} and the moment \tau_e exerted by the tool and the castor, and on the motor and controller parameters. The uncertainty vector in (3.18) can be disregarded if the slip speeds of the wheels, the forces and moments exerted by the tool, and the forces exerted by the castor are not significant.

The accelerations \dot{u} and \dot{\omega} do not depend on the states x, y and \theta; these variables can therefore be expressed as follows:

\dot{u} = \frac{\theta_3}{\theta_1}\,\omega^2 - \frac{\theta_4}{\theta_1}\,u + \frac{1}{\theta_1}\,u_{ref} + \delta_u
\dot{\omega} = -\frac{\theta_5}{\theta_2}\,u\,\omega - \frac{\theta_6}{\theta_2}\,\omega + \frac{1}{\theta_2}\,\omega_{ref} + \delta_\omega

By rearranging, and disregarding the uncertainty vector, the linear parameterization is attained:

\begin{bmatrix} u_{ref} \\ \omega_{ref} \end{bmatrix} =
\begin{bmatrix} \dot{u} & 0 & -\omega^2 & u & 0 & 0 \\ 0 & \dot{\omega} & 0 & 0 & u\,\omega & \omega \end{bmatrix}\,\theta

where \theta = [\theta_1\;\; \theta_2\;\; \theta_3\;\; \theta_4\;\; \theta_5\;\; \theta_6]^T; with an identification method, the vector \theta can be easily identified.
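A minimal simulation sketch of the simplified model (3.18), with the uncertainty vector neglected, is given below; the numerical values of \theta_1, ..., \theta_6 and of the distance a are placeholders chosen only to make the example run, not identified parameters of any real robot.

```python
import numpy as np

# Placeholder parameters theta_1..theta_6 of (3.18); a real robot requires
# its own identification experiment to obtain these values.
th = np.array([0.25, 0.25, 0.01, 1.0, 0.01, 1.0])
a = 0.15   # distance from the wheel axis center to the point of interest h [m]

def dynamic_model(state, u_ref, w_ref):
    """Right-hand side of (3.18) with the uncertainty vector neglected."""
    x, y, theta, u, w = state
    return np.array([
        u * np.cos(theta) - a * w * np.sin(theta),
        u * np.sin(theta) + a * w * np.cos(theta),
        w,
        (th[2] / th[0]) * w**2 - (th[3] / th[0]) * u + u_ref / th[0],
        -(th[4] / th[1]) * u * w - (th[5] / th[1]) * w + w_ref / th[1],
    ])

# Euler simulation of a step in the reference velocities.
state, dt = np.zeros(5), 0.01
for _ in range(500):
    state = state + dynamic_model(state, u_ref=0.4, w_ref=0.3) * dt
print("x, y, theta, u, omega =", np.round(state, 3))
```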
3.3 Navigation of Mobile Robotic Systems

As for many robot tasks, mobility is an important issue: robots have to navigate their environments in a safe and reasonable way. In the field of mobile robotics, navigation describes the techniques that allow a robot to use the information it has gathered about the environment to reach, in an effective and efficient way, goals that are given a priori or derived from a higher-level task description. The main question of navigation is how to get from where we are to where we want to be. Researchers have worked on that question since the early days of mobile robotics and have developed many solutions to the problem considering different robot environments. Those include indoor environments as well as much larger scale outdoor environments and underwater navigation. Besides the question of global navigation, how to get from A to B, navigation in mobile robotics has local aspects. Depending on the architecture of a mobile robot (differential drive, car-like, submarine, plane, etc.), the robot's possible actions are constrained not only by the robot's environment but also by its dynamics. Robot motion planning takes these dynamics into account to choose feasible actions and thus ensure a safe motion.

3.3.1 Navigation Systems

A navigation system is the method for guiding a vehicle. Several capabilities are needed for autonomous navigation (Alhaj Ali 2003):
1. The ability to execute elementary goal-achieving actions, such as going to a given location or following a leader;
2. The ability to react to unexpected events in real time, such as avoiding a suddenly appearing obstacle;
3. The ability to formulate a map of the environment.

Odometry. This method, like other dead-reckoning methods, uses encoders to measure wheel rotation and/or steering orientation (Ojeda 2003); a minimal dead-reckoning sketch is given below. There are several approaches to reduce the odometry errors in mobile robots; for over-constrained mobile robots, for example, three error-reducing methods have been proposed. The first, called the "Fewest Pulses" method, makes use of the observation that most terrain irregularities, as well as wheel slip, result in an erroneous over-count of encoder pulses. The second, called "Cross-coupled Control", optimizes the motor control algorithm of the robot to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. The third is based on so-called "Expert Rules", in which readings from redundant encoders are compared and utilized in different ways, according to predefined rules.

Sensor based navigation. Sensor based navigation systems that rely on sonar or laser scanners providing one-dimensional distance profiles have been used for collision and obstacle avoidance. A generally adaptable control structure is also required. The mobile robot must make decisions on its navigation tactics: decide which information to use to modify its position, which path to follow around obstacles, when stopping is the safest alternative, and in which direction to proceed when no path is given. In addition, sensor information can be used for constructing maps of the environment for short-term reactive planning and long-term environmental learning (Alhaj Ali 2003).
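The dead-reckoning sketch announced above is given here for a differential-drive robot: the pose is propagated from incremental encoder counts. Wheel radius, track width and encoder resolution are invented example values.

```python
import math

# Illustrative differential-drive odometry from encoder ticks; the wheel radius,
# track width and encoder resolution below are made-up example values.
WHEEL_RADIUS = 0.097        # [m]
TRACK_WIDTH = 0.381         # distance between the two driven wheels [m]
TICKS_PER_REV = 2048

def odometry_update(pose, d_ticks_left, d_ticks_right):
    """Dead-reckoning pose update from incremental encoder counts."""
    x, y, theta = pose
    dl = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    ds = (dr + dl) / 2.0                 # distance travelled by the mid-point
    dtheta = (dr - dl) / TRACK_WIDTH     # change in heading
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

pose = (0.0, 0.0, 0.0)
for left, right in [(120, 140), (118, 142), (121, 139)]:   # fake tick increments
    pose = odometry_update(pose, left, right)
print(pose)
```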
Vision based navigation. Computer vision and image sequence techniques have been proposed for obstacle detection and avoidance for autonomous land vehicles that navigate in an outdoor road environment. The object shape boundary is first extracted from the image; after translating from the vehicle location in the current cycle to that in the next cycle, the position of the object shape in the image of the next cycle is predicted, and it is then matched with the extracted shape of the object in the image of the next cycle to decide whether the object is an obstacle (Chen & Tsai 2000).

Inertial navigation. This method uses gyroscopes and sometimes accelerometers to measure the rate of rotation and the acceleration.

Active beacon navigation systems. This method computes the absolute position of the robot from measurements of the direction of incidence of three or more actively transmitted beacons. The transmitters, usually using light or radio frequencies, must be located at known sites in the environment (Premvuti & Wang 1996; Alhaj Ali 2003).

Landmark navigation. In this method distinctive artificial landmarks are placed at known locations in the environment so that they can be detected even under adverse environmental conditions.

Map-based positioning. In this method information acquired from the robot's onboard sensors is compared to a map or world model of the environment. The absolute location of the vehicle can be estimated if features from the sensor-based map and the world model map match.

Biological navigation. Biologically-inspired approaches have been utilized in the development of intelligent adaptive systems; biomimetic systems provide a real-world test of biological navigation behaviours besides making new navigation mechanisms available for indoor robots.

Global positioning system (GPS). This system provides specially coded satellite signals that can be processed in a GPS receiver, enabling it to compute position, velocity, and time.

3.3.2 Controllers for Mobile Robots

Robots have complex nonlinear dynamics that make their accurate and robust control difficult. On the other hand, they fall into the class of Lagrangian dynamical systems, so they have several extremely nice physical properties that make their control straightforward (Lewis et al. 1999). Different controllers have been developed for the motion of robot manipulators; however, only recently has there been an interest in moving the robot itself, not only its manipulators.
From a mathematical point of view, the no-slip condition of non-omnidirectional wheeled robots results in a kinematic constraint represented by a differential relationship. This kind of constraint cannot be integrated and, in nonholonomic systems, it is not possible to choose generalized coordinates equal in number to the degrees of freedom. That is, the number of generalized (e.g., Lagrangian) coordinates exceeds the number of degrees of freedom by the number of independent, nonintegrable constraints. Hence, nonholonomic systems present an additional complexity for the control system, because these systems cannot be stabilized at a point by smooth feedback (De Luca & Oriolo 1995). Thus, the design of posture stabilization laws for nonholonomic systems has to face a serious structural obstruction. As a consequence, and in contrast with the usual situation, tracking is easier than regulation for a nonholonomic vehicle. However, several alternative approaches have been proposed for the regulation of nonholonomic systems.

The simplest approach to designing feedback controllers for nonholonomic systems is probably to decompose the control into two stages: first, find an open-loop strategy that can achieve any desired reconfiguration for the particular system under consideration; second, transform the motion sequence into a succession of equilibrium manifolds, which are then stabilized by feedback. The overall resulting feedback is necessarily discontinuous, because of the switching of the target manifolds. Each stabilization problem in the succession should be completed in finite time (that is, not just asymptotically), so as to have a well-defined procedure. In order to achieve such convergence behavior, discontinuous feedback is used within each stabilizing phase. The weakness of this approach is that it requires the ability to devise an open-loop strategy for the system. Moreover, any perturbation occurring on a variable that is not directly controlled during the current phase will result in a final error. As a result, feedback robustness is achieved only with respect to perturbations of the initial conditions.

Another approach is to use a time-varying controller. The idea of allowing the feedback law to depend explicitly on time is due to Samson (1993), who presented smooth stabilization schemes for car-like kinematic models. When considering the point-stabilization of a time-invariant nonholonomic system, the introduction of a time-varying component in the control law may lead to a smoothly stabilizable system. However, these time-varying control laws typically have slow rates of convergence and require difficult tuning of the various controller parameters. An experimental validation of these kinds of controllers can be found in Kim & Tsiotras (2002).

Hybrid strategies for the stabilization of the unicycle have been proposed in Pomet et al. (1992) and Toibero (2007). The first work combines the advantages of smooth static feedback far from the target with time-varying feedback close to the target. The second introduces asymptotically stable switched control systems with redundant controllers that allow keeping the robot navigating in the center of a corridor and solving the parking problem for mobile robots. Next, some control strategies for mobile robots will be introduced.

Position Control. Position control implies controlling both the position and the orientation of the mobile robot. This is not an easy task, because of the limitation given by Brockett's condition (Stern 2002). In the conventional position control algorithm of mobile robots, the outputs of the system, including the position and orientation sensing data, are fed back together to the system input through a feedback loop.

Trajectory Tracking Control. The aim of trajectory tracking control is that the robot reach and follow, with zero error, desired time-varying states; these desired states describe desired trajectories. One way to perform trajectory tracking control is by means of a virtual robot (Canudas de Wit et al. 1997). The virtual robot model (see Fig. 3.12) is given by

\dot{x}_d = u_d\,\cos\theta_d
\dot{y}_d = u_d\,\sin\theta_d     (3.19)
\dot{\theta}_d = \omega_d

The control errors are defined by

\begin{bmatrix} \tilde{x} \\ \tilde{y} \\ \tilde{\theta} \end{bmatrix} =
\begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_d - x \\ y_d - y \\ \theta_d - \theta \end{bmatrix}     (3.20)

where \tilde{x}, \tilde{y} and \tilde{\theta} are the control errors. The matrix in (3.20) is invertible, so that when \tilde{x} \to 0, \tilde{y} \to 0 and \tilde{\theta} \to 0, then x \to x_d, y \to y_d and \theta \to \theta_d.

Figure 3.12: Real and virtual mobile robots for trajectory tracking control
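As an illustration of how the errors in (3.20) are typically used, the sketch below implements a widely used kinematic tracking law in the spirit of Canudas de Wit et al. (1997) and Kanayama-type controllers; the gains are arbitrary, and this is not necessarily the specific controller developed in this work.

```python
import numpy as np

# Gains of the illustrative kinematic tracking law (hand-picked, not from the text).
KX, KY, KTH = 1.0, 4.0, 2.0

def tracking_control(pose, pose_d, u_d, w_d):
    """Kinematic trajectory-tracking law based on the errors defined in (3.20).

    pose   = (x, y, theta)       current robot posture
    pose_d = (xd, yd, thetad)    virtual (reference) robot posture
    u_d, w_d: reference linear and angular velocities of the virtual robot.
    """
    x, y, th = pose
    xd, yd, thd = pose_d
    # Tracking error expressed in the robot frame, eq. (3.20).
    ex = np.cos(th) * (xd - x) + np.sin(th) * (yd - y)
    ey = -np.sin(th) * (xd - x) + np.cos(th) * (yd - y)
    eth = thd - th
    # Kanayama-style control law: drives (ex, ey, eth) to zero for u_d > 0.
    u = u_d * np.cos(eth) + KX * ex
    w = w_d + u_d * (KY * ey + KTH * np.sin(eth))
    return u, w

print(tracking_control((0.0, -0.2, 0.1), (0.0, 0.0, 0.0), u_d=0.3, w_d=0.0))
```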
3.3.3 Collision Avoidance

The basic techniques for a mobile robot to move toward a destination are localization, path planning, and control. The localization technique is required for the robot to know where it is (Borenstein et al. 1996); it is mainly dependent upon the accuracy of the sensors and of the sensor fusion algorithms. The path planning technique includes collision avoidance and optimal trajectory design (Xu et al. 2002; Khatib 1986). The control technique determines how the robot follows the prespecified trajectory with minimal tracking errors (Rosales et al. 2008). These techniques are must-have methods for the navigation of an autonomous mobile robot.

For indoor applications of a mobile robot, localization can be done by constructing an environmental map around the robot from sensors. The map can be used to determine collision avoidance with the walls and objects in the path. Constructing a map works quite well for a robot to find a path that does not collide with static objects such as walls. For moving objects such as human beings, however, it is very difficult for a robot to draw dynamic maps that generate the collision avoidance path and to maintain a desired distance between the robot and the object. The robot requires a prompt dynamic reaction to avoid collision with moving objects, rather than relying on static maps.

Current sensor technology available for obstacle avoidance includes GPS, ladar (laser detection and ranging) in both two and three dimensions, sonar, microwave radar and CCD cameras. In addition to the features specific to each of these sensor technologies, the factors to be considered when choosing a sensor setup for a platform include cost, computational complexity of processing, response time, field of vision, resolution, range of detection, operation in two or three dimensions, and the effect of adverse weather conditions.

Figure 3.13: Collision Avoidance by using the Ladar Technique

Most research to date in obstacle avoidance has made use of ultrasonic sensors or sonar due to their low cost and simplicity of use. Unfortunately these advantages are far outweighed by a very small range of detection (3 m at 40 kHz), a poor angular accuracy (+/-5°) and total failure in adverse weather conditions such as strong winds or on platforms operating at high speeds (Myers & Vlacic 2005). Furthermore, problems with crosstalk mean that sonar sensors with a 60 ms response time operating in a ring configuration can exhibit a response time for the entire ring of 300 ms, which is unacceptable for real-time operation. These problems render the algorithms created specifically for this type of sensor useless for all but the most trivial applications, in which the platform operates in an indoor static environment at low speeds.

Another sensor to gain popularity more recently has been the ladar sensor. Its large range of detection, fast response time, and low complexity of processing make it ideal for outdoor high-speed applications such as autonomous driving. Its main drawback is that in adverse weather conditions rain, snow, and dirt can be perceived as false objects. Also, different types of sensors are being continually developed.
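As a simplified illustration of how a 2-D ladar scan is used for reactive collision checking, the sketch below extracts the closest return inside a frontal cone; the scan layout and the numbers are invented for the example.

```python
import numpy as np

def nearest_obstacle(ranges, angles, fov=np.deg2rad(60)):
    """Return (distance, bearing) of the closest return inside a frontal cone.

    ranges, angles: arrays from a 2-D laser scanner (metres, radians,
    bearing 0 pointing straight ahead). Invalid returns are NaN or inf.
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    mask = np.isfinite(ranges) & (np.abs(angles) <= fov / 2)
    if not mask.any():
        return None
    i = np.argmin(np.where(mask, ranges, np.inf))
    return ranges[i], angles[i]

# Fake 181-beam scan over [-90 deg, +90 deg] with an object ~1.2 m ahead-left.
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
ranges = np.full(181, 8.0)
ranges[95:105] = 1.2
print(nearest_obstacle(ranges, angles))   # closest return ~1.2 m, slightly left of center
```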
A good compromise at this research stage would be to use data fusion to combine information from a ladar sensor with short-wave radar, which is 78 CHAPTER 3: MOBILE ROBOTIC SYSTEMS not susceptible to adverse weather conditions, thereby creating a failsafe sensor setup. The only drawback to this configuration is the current high cost of the short-wave radar technology. The separation of obstacle avoidance into different methods is a very controversial subject in which there are many differing opinions. Most agree that there are two approaches, global and local (though not always with these titles). This however is where the agreement ends and due to different authors differing definitions it is common to find methods classified as global by one author and local by another. Global Methods. These methods operate in a static environment by computing off-line an optimized path from start to finish that avoids all known static obstacles. This approach cannot deal with incomplete or inaccurate information or a time-varying environment and the complexity of this approach means that re-planning is too computationally expensive. Local methods. These methods use only a small fraction of the world space and operate in real-time in a dynamic, time-varying environment. They have the disadvantage of not being able to produce optimal solution and can get trapped in local minima (such as a large U shaped obstacle). Using this definition of local methods there are two distinct types of methods that fit this category: local path planners and reactive methods. Local path planning methods map out the entire path (made under the assumption of static obstacles) and then make adjustments while following the planned path. The most well-known local path planning method is the potential field approach which has been implemented using a wide variety of different methods including a method to remove local minima using harmonic functions. Reactive methods make angular and translation velocity commands based upon information processed from current sensory data. Current Reactive methods include the curvature velocity method, the Dynamic Window approach (Fox et al. 1997), Velocity Obstacles approach (Nak et al. 1998), Vector Field Histograms (Borenstein & Koren 1991), Polar Object Chart method, and Fuzzy Logic. The combination of local and global methods into an integrated system is called a Hybrid method. These methods are designed to combine the advantages of both methods and remove the disadvantages of each operating singularly. Most hybrid methods operate by using a global path DYNAMIC CONTROL OF MOBILE ROBOTIC SYSTEMS 79 planner to provide sub-points along an optimized path that are then used as goal points for a local method. Indoor mobile robots should perform goal-directed tasks in cramped and unknown environments. Both global path planning and local reactive obstacle avoidance algorithms must be implemented in order to make a mobile robot with this capability. While a global path planning algorithm calculates optimal path to a specified goal, a reactive local obstacle avoidance module takes into account the unknown and changing characteristics of the environment based on the local sensory information. 3.4 Multi-Robot Systems In the last years, Multi-Robot Systems are receiving an increased attention in the scientific community. 
3.4 Multi-Robot Systems

In recent years, multi-robot systems have been receiving increased attention in the scientific community. In part, this is due to the fact that many problems related to the control of a single robot have been, at least partially, solved, and researchers have started to look at the massive introduction of mobile systems in real-world domains. In this perspective, multi-robot systems are an obvious choice for all those applications which implicitly benefit from redundancy, i.e., applications in which, even in the absence of a coordination strategy, having more robots working in parallel improves the system's fault tolerance, reduces the time required to execute tasks, and guarantees a higher service availability and a quicker response to user requests. When more complex scenarios are taken into account, multi-robot systems are no longer merely an option; consider, for example, a team of robots trying to achieve an objective which cannot be attained by a single robot, or maintaining a constant spatial relationship between each other in order to execute a task more effectively.

One of the first issues to be faced is whether coordination should be formalized and solved through a centralized control process, which computes outputs to be fed to the actuators to perform a coordinated motion of all the components of the system, or a distributed one, with each member of the team taking decisions autonomously on the basis of its own sensory inputs, its internal state and, if available, information exchanged with other robots.

Figure 3.14: Multi-robot Cooperative System

The task can be considered as the aim of the multi-robot system; thus, it changes depending on the application and on the typology of the multi-robot system. The task of the system is usually decomposed into elementary sub-tasks (task decomposition) that are easier to understand and control. These sub-tasks can be distributed among multiple resources (task allocation), while the overall behavior of the system depends on how these sub-tasks are recombined to obtain the final action of the system. The mechanism of cooperation represents the logic that originates the cooperation, and it may depend on the control architectures and strategies, on aspects of the task specification or on the interaction dynamics among the behaviors. Thus, the multi-robot system has to exhibit a collective behavior, or a set of actions, that accomplishes the same behavior that would be required of a single, more complex robot. To exhibit this cooperative intelligent behavior, the members of the multi-robot system have to communicate, either directly through an explicit communication channel or indirectly through one robot sensing the others. The system performance can be represented through characteristics such as execution time of the mission, computational complexity, robustness and fault tolerance, and it may depend on the global structure of the system, e.g., the typology of the system, control architectures and strategies, task definition and actuation, and communication characteristics. Thus, largely different kinds of control architectures for multi-robot systems have been presented in the literature; however, the main distinction can be made between centralized and decentralized systems.

3.4.1 Centralized Control

Whereas central approaches often lead to an optimal solution, at least in a statistical sense, distributed approaches usually lack the necessary information to optimally solve planning and control problems.
They are nevertheless more effective in dealing with real-world, non-structured scenarios: since decisions are taken locally, the system is still able to work when communication is not available and, in general, allows a quicker, real-time response and a higher fault tolerance in a dynamically changing environment.

In centralized systems, a core unit collects and manages information about the environment to coordinate and control the motion of the robots and to guarantee the correct achievement of the mission. In such approaches, the core unit plays a fundamental role because it manages the whole system, i.e., it has to coordinate the information received from the distributed sensors or manage global information about the environment, take all the required decisions, and communicate with all the robots of the team; thus, it should be powerful enough to satisfy all the technological requirements. Most teams in RoboCup share these considerations, even if counterexamples exist (Weigel et al. 2002). Notice also that, apart from optimality, univocally accepted metrics or criteria to compare the performance of centralized and distributed multi-robot systems are missing; the problem is raised, for example, in (Schneider 2005).

3.4.2 Decentralized Control

The dual problem is cooperative sensing, in which robots share their perceptual information in order to build a more complete and reliable representation of the environment and of the system state. The most notable example is collaborative localization and mapping (CLAM, see Madhavan et al. 2004), which has recently been formalized in the same statistical framework as simultaneous localization and mapping (SLAM), as an extension of the single-robot problem. However, the general idea is older and different approaches exist in the literature, often relying on a sharp division of roles within the team. Cooperative sensing can include tracking targets with multiple observers (Parker 2002), helping each other in reaching target locations (Sgorbissa & Arkin 2003), or the pursuit of an evader with multiple pursuers (Vidal et al. 2002). Notice also that cooperative sensing often relies on cooperative motion, and therefore the two cannot be considered totally disjoint classes of problems.

In decentralized approaches, instead, the resources are distributed among all the robots. Each vehicle uses its own sensors to extract local information about the environment and the relative positions of the nearby robots in order to take its own decisions; moreover, each vehicle can communicate and share information only with the nearby vehicles, and it is aimed at achieving only a part of the global mission. Behaviour-based, auction-based, or similar distributed approaches (with or without explicit communication) are very common; this happens for the reasons already discussed, not least the fact that optimality in task and role assignment is computationally very expensive to achieve, and consequently inappropriate for real-time operation.

Figure 3.15: Basic Scheme of a Multi-robot System

Advantages and disadvantages of centralized and decentralized systems have been the object of several discussions in the scientific community. Centralized systems, for example, can manage global information about the environment and optimize the coordination among the robots or the accomplishment of the mission; moreover, they can easily manage faults of some of the robots.
On the other hand, the core unit may represent a weakness of the system: it can be the bottleneck of the system, both for computational and for communication time requirements; moreover, a fault of the core unit compromises the whole system. Decentralized systems, instead, permit taking full advantage of distributed sensing and actuation, i.e., they make it possible to use less powerful robots or more, cheaper sensors; they permit optimizing the allocation of the resources and equipping the robots of the team with different actuation and sensor systems; moreover, decentralized systems are easily made tolerant to possible vehicle faults. On the other hand, within decentralized systems it is difficult to coordinate the robots and optimize the execution of the mission, and problems like global localization and mapping, and communication bandwidth, represent limits of this kind of system (Arrichiello 2006).

In practice, many systems are not strictly centralized or decentralized. In fact, many largely decentralized architectures utilize leader agents (Tanner et al. 2004); moreover, different hybrid centralized/decentralized architectures have been presented to take partial advantage of both typologies (the hybrid architectures in Das et al. (2002), Feddema et al. (2002) and Stilwell et al. (2005) have central planners that perform high-level control over mostly autonomous robots).

The research in multi-robot systems has matured to the point where systems with hundreds of robots are being proposed (Howard et al. 2006; Parker 2003). To achieve a given task, the robots have to share information; thus, increasing the team size directly increases the required resources (e.g., time, sensory effort and communication bandwidth). In this sense, all the communication characteristics, such as the topology of the network, the communication bandwidth, the message coordination strategy, and the traffic of information among robots and remote units, represent open issues for mobile robot applications.

The term scalability can refer to both static and dynamic scalability. That is, a system is statically scalable when the control architecture can be kept exactly the same whether thousands of robots are deployed or only a few are used; a system is dynamically scalable when robots can be added to or removed from the system on the fly; they may also have the ability to reallocate and redistribute themselves in a self-organized way. The scalability properties can be used as evaluation parameters for multi-robot systems. Moreover, among the evaluation parameters, robustness, rather than efficiency, is promoted. In fact, a multi-robot system may be robust to malfunctions like unreliable communication and robot failures. Moreover, a multi-robot system may be robust to a priori unknown environmental and team changes not only through unit redundancy but also through a balance between exploratory and exploitative behavior.

3.4.3 Architectures of Formation

There exists a large number of publications on feedback in the field of cooperative control of autonomous systems; recent results are found in Beard et al. (2001), Nijmeijer & Rodriguez-Angelez (2003), Fax & Murray (2004), Spry & Hedrick (2004), Ögren et al. (2004), Kingston et al. (2005), Kumar et al. (2005), and references therein.
A recent survey paper by Ren et al. (2005) connects various coordinated control problems with consensus problems known from other scientific fields. While the applications are different, some common fundamental parts can be extracted from the many approaches to vehicle formation control. Roughly three approaches are found in the literature.

Leader-Follower. Briefly explained, the leader-following architecture defines a leader in the formation, while the other members of the formation follow that leader's position and orientation with some prescribed offset. One of the first studies on leader-following formation control for mobile robots is reported in Wang (1991). Sheikholeslam & Desoer (1992) formulate decentralized control laws for the highway congestion problem using information from the leader's dynamics and the distance to the preceding vehicle. Variations on this theme include multiple leaders, forming a chain, and other tree topologies. This approach has the advantage of simplicity, in that the internal stability of the formation is implied by the stability of the individual vehicles, but it is heavily dependent on the leader for achieving the control objective. Over-reliance on a single vehicle in the formation may be disadvantageous, and the lack of explicit feedback from the formation to the leader may destabilize the formation. A leader-follower architecture for marine craft has been approached in Encarnação & Pascoal (2001a), where an autonomous underwater vehicle is forced to track the motion of an autonomous surface craft, projected down to a fixed depth.

Behavioral Methods. The behavioral approach prescribes a set of desired behaviors for each member in the group and weighs them such that a desirable group behavior emerges without an explicit model of the subsystems or the environment. Possible behaviors include trajectory and neighbor tracking, collision and obstacle avoidance, and formation keeping. One paper that describes the behavioral approach for multi-robot teams is Balch & Arkin (1998), where formation behaviors are implemented together with other navigational behaviors to derive control strategies for goal seeking, collision avoidance and formation maintenance. In formation control, several objectives need to be met, and from the behavioral approach it is expected that averaging the weighted (perhaps competing) behaviors gives a control law that meets the control objectives. This approach motivates a decentralized implementation where feedback to the formation is present, since a vehicle reacts according to its neighbors. When the behavioral rules are given as algorithms, this approach is hard to analyze mathematically: the group behavior is not explicit, and characteristics such as stability cannot generally be guaranteed. System-theoretic approaches to behavioral control can be found in Stilwell & Bishop (2002) and Antonelli & Chiaverini (2004). The authors use a set of functions and control techniques for redundant robotic manipulators, given in Siciliano & Slotine (1991), to control a platoon of autonomous vehicles. Different tasks can be merged, according to their priority, with an inverse kinematics algorithm.

Virtual Structures. In the virtual structure approach, the entire formation is treated as a single virtual structure and acts as a single rigid body. The control law for a single vehicle is derived by defining the dynamics of the virtual structure and then translating the motion of the virtual structure into the desired motion for each vehicle.
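The common geometric core of the virtual structure idea can be sketched in a few lines: given the pose of the virtual rigid body and a set of body-frame offsets, each vehicle obtains a desired position to track (for example with a tracking controller such as the one in Section 3.3.2). This is only the geometric part of the approach, not the formation feedback schemes of the works cited below; the offsets used here are arbitrary examples.

```python
import numpy as np

def member_setpoints(structure_pose, offsets):
    """Desired positions of formation members in a virtual-structure scheme.

    structure_pose = (X, Y, Phi): pose of the virtual rigid body.
    offsets: list of (dx, dy) positions of each member in the body frame;
    changing this list changes the formation shape (line, V, circle, ...).
    """
    X, Y, Phi = structure_pose
    c, s = np.cos(Phi), np.sin(Phi)
    R = np.array([[c, -s], [s, c]])            # body-to-world rotation
    return [np.array([X, Y]) + R @ np.array(o) for o in offsets]

# A three-vehicle "V" formation behind the structure origin (example offsets).
print(member_setpoints((2.0, 1.0, np.pi / 4),
                       offsets=[(0.0, 0.0), (-1.0, 0.8), (-1.0, -0.8)]))
```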
Virtual structures have been achieved by, for example, having all members of the formation track assigned nodes which move through space in the desired configuration, and using formation feedback to prevent members from leaving the formation, as in Beard et al. (2001) and Ren & Beard (2004). In Egerstedt & Hu (2001) each member of the formation tracks a virtual element, while the motion of the elements is governed by a formation function that specifies the desired geometry of the formation. This approach makes it easy to prescribe a coordinated behavior for the group, while formation keeping is naturally assured by the approach. However, if the formation has to maintain exactly the same virtual structure at all times, the potential applications are limited. Skjetne, Moi & Fossen (2002) create a virtual structure of marine surface vessels by using a centralized control law that maneuvers the formation along a predefined path.

3.4.4 Configuration of the Formation

Depending on its current coordination goal, a formation can have many different shapes. For example, marine surface vessels can be in a side-by-side formation during underway replenishment operations or in a V formation during transit (to save energy). Thus, formation control systems should be able to encompass changing configurations during operation. In addition, with a stable dynamic formation topology, vehicles are permitted to leave and join the formation without changing the formation stability properties. This can further be extended to allow formations to split and merge. Dynamic topologies are considered in, e.g., Tanner et al. (2004), Fax & Murray (2004), Olfati-Saber & Murray (2004), and Arcak (2006).

When several control systems are to be coordinated, information must be exchanged between them in order to complete the control task. Ren et al. (2005) state the following intuitive axiom: shared information is a necessary condition for coordination. The amount of communicated information depends on the coordination task: if two systems must synchronize their positions, some information about the other system's position must be known. If the goal is synchronized motion (both position and velocity), the systems must also share information about their velocity. The coordination goal might be assembling into a desired formation configuration, ending up at a given location at an appointed time, or synchronized motion. An alternative to sharing both position and velocity information during operation is to consider synchronized paths, which incorporate information not only about position but also about velocity and acceleration assignments. Thus, motion can be coordinated with a smaller amount of shared information, since a position on the path implies fixed velocity and acceleration assignments (a small sketch of this idea is given below). In order to achieve proper synchronization, the individual paths must be coordinated at the start of the operation.

Information must be exchanged over a communication channel. Typically, for a set of independent vehicles, a communication protocol is set up over a physical medium, e.g., using radio, acoustic, or optical signals. Moreau (2005) studies multi-agent systems with time-dependent communication links; Olfati-Saber & Murray (2004) investigate consensus problems with time-delays.
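The synchronized-path idea mentioned above can be sketched as follows: each vehicle evaluates one shared, pre-agreed path parameterization, so agreeing on a single scalar parameter is enough to fix consistent position, velocity and acceleration assignments. The circular path used here is an arbitrary example, not a parameterization taken from the cited works.

```python
import numpy as np

def path_state(s):
    """Position, velocity and acceleration assignments along one shared path.

    The path is an illustrative circle of radius R traversed at unit parameter
    rate; any smooth parameterization can be used in the same way.
    """
    R, k = 5.0, 0.2                      # radius [m] and parameter-to-angle scaling
    pos = np.array([R * np.cos(k * s), R * np.sin(k * s)])
    vel = np.array([-R * k * np.sin(k * s), R * k * np.cos(k * s)])
    acc = np.array([-R * k**2 * np.cos(k * s), -R * k**2 * np.sin(k * s)])
    return pos, vel, acc

# Two vehicles only need to agree on the scalar path parameter s to be
# synchronized; positions, velocities and accelerations then follow.
s_shared = 12.3
for vehicle in ("A", "B"):
    p, v, a = path_state(s_shared)
    print(vehicle, np.round(p, 2), np.round(v, 2), np.round(a, 2))
```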
Standard communication protocols offer robustness to signal loss, delays, etc., but communication issues such as inconsistent delays, noise, signal dropouts, and possible asynchronous updates should be taken into account in the formation control architecture.

3.4.5 Multi-Agent Systems

Multi-agent system concepts appeared recently and have spread widely across research areas; the aim is to solve problems through the cooperation of many agents. An agent in a multi-agent system is defined as an entity, such as a software routine, robot, sensor, process or person, which performs actions, does work and makes decisions. In terms of human society, cooperation means "an intricate and subtle activity, which has defied many attempts to formalize it". Artificial and real social activity in social systems are the paradigm examples of cooperation. On the multi-agent side, there are many definitions of cooperation; the most popular are:
1. Multiple agents work together to do something that creates a progressive result, such as increasing performance or saving time.
2. One agent adopts the goal of another agent. The hypothesis is that the two agents have been designed in advance and that there is no goal conflict between them; furthermore, one agent adopts the other agent's aim only passively.
3. One autonomous agent adopts another autonomous agent's goal. The hypothesis is that cooperation only occurs between agents which have the ability to reject or accept the cooperation.

An agent is a computer system that is capable of autonomous action on behalf of its user or owner in some environment in order to meet its design objectives. An intelligent agent is a computer system capable of flexible autonomous action in some environment; by flexible it is meant reactive, pro-active and social. A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent. A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent's control.

The assumed task condition is that a group of mobile robots is randomly placed in an unknown area; the area has a limited boundary and may contain some obstacles. The robots should ultimately be programmed to have the ability to find their way to gather at a certain desired place in that area; the ability to plan their paths and avoid collisions; the ability to communicate with each peer robot and with the server through a wireless network; and the ability to form a line and move along a line, and to form a circle and move around a circle; a minimal gathering sketch is given below.
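As a toy illustration of the gathering task described above, the sketch below runs a standard discrete-time consensus (rendezvous) update in which each robot moves toward the average position of its communication neighbours; the graph, gain and initial positions are invented, and collision avoidance and the nonholonomic constraints of Section 3.2.2 are ignored.

```python
import numpy as np

# Illustrative discrete-time consensus (rendezvous): each robot repeatedly moves
# toward the average position of the neighbours it can communicate with.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a simple chain topology
positions = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0], [6.0, 3.0]])
EPS = 0.3                                             # consensus step size

for _ in range(100):
    new_positions = positions.copy()
    for i, nbrs in neighbours.items():
        # Move toward the neighbours' average (standard consensus update).
        avg = positions[nbrs].mean(axis=0)
        new_positions[i] += EPS * (avg - positions[i])
    positions = new_positions

print(np.round(positions, 2))    # all robots end up close to a common point
```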