# The Robotics Institute
Carnegie Mellon University, Pittsburgh, PA 15213
May 13, 2005
Submitted in partial fulfillment of the requirements for the ...

## 2.1 Deterministic Path Planning

The objective of robot path planning is to determine a trajectory that achieves a goal state while avoiding contact with obstacles. Path planning typically occurs in the configuration space, whose dimensions each correspond to a degree of freedom of the robot. Motion sequences in the real world define trajectories in configuration space, and obstacles in the real world map to regions of configuration space encoding the ranges of state variables over which motion is not legal. Path planning algorithms follow three basic approaches: cell decomposition, roadmap methods, and potential field techniques. Latombe [38] treats each in significant depth. A generalization of the problem assigns weights (non-uniformly) to all regions of the configuration space. The cost of a trajectory is the integral of the weights along the path. Obstacles have infinitely large weights, and free space has zero weight. The planning problem then becomes finding the path that minimizes the cost between two points.

### 2.1.1 Cell Decomposition

Defining a regular grid or lattice that decomposes a space into regularly-spaced regions is formally called approximate cell decomposition. Grids are favored from a software implementation standpoint since they map naturally to array data structures and cell adjacency is implicitly encoded. Because a grid pattern only approximates the boundaries of free space, approximate cell decomposition techniques are incomplete. At the expense of time and memory, grids can be made arbitrarily fine, but where large regions of a space are homogeneous, grids are particularly memory inefficient. In many representations, motion in a regular grid is constrained to either the four-connected or eight-connected graph directly linking adjacent cells. As a result, planning in a grid cannot find the minimum-distance path between two points unless they lie along the directions of allowable motion.
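To make the grid formulation concrete, the following sketch runs a standard A* search over an eight-connected occupancy grid; the function name, the octile heuristic, and the cost convention (straight moves cost 1, diagonal moves cost the square root of 2) are illustrative choices, not drawn from the text.

```python
import heapq
import math

def astar_grid(grid, start, goal):
    """A* search on an eight-connected occupancy grid.

    grid: 2D list of 0 (free) and 1 (obstacle).
    start, goal: (row, col) tuples.
    Returns the minimum path cost under eight-connectivity, or None.
    """
    rows, cols = len(grid), len(grid[0])
    moves = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]

    def h(cell):
        # Octile-distance heuristic: admissible for eight-connected motion.
        dr, dc = abs(cell[0] - goal[0]), abs(cell[1] - goal[1])
        return max(dr, dc) + (math.sqrt(2) - 1) * min(dr, dc)

    open_set = [(h(start), 0.0, start)]
    best = {start: 0.0}
    while open_set:
        f, g, cell = heapq.heappop(open_set)
        if cell == goal:
            return g
        if g > best.get(cell, math.inf):
            continue  # stale queue entry
        for dr, dc in moves:
            nr, nc = cell[0] + dr, cell[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + math.hypot(dr, dc)  # 1 or sqrt(2)
                if ng < best.get((nr, nc), math.inf):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

The limitation noted above is visible here: even on an empty grid, the returned cost between diagonally opposite corners is 2√2 per two cells of separation rather than the true Euclidean straight-line distance, because motion is restricted to the eight grid directions.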


Multi-resolution grids extend the regular grid approach to improve memory efficiency for spaces that contain large regions of free space and obstacles. In two dimensions, a space is initially divided using the coarsest possible square cells. Those cells that intersect both free space and obstacle space are subdivided into four equal cells each half the size of the original. The process continues until remaining cells sharing free and obstacle space are of the minimum allowable cell size and cannot be divided further. The decomposition can be represented as a quadtree, where parent nodes are large cells and child nodes are the smaller cells. The result is that large regions of free space are represented efficiently, and borders between free and obstacle space are represented finely. Szczerba and Chen developed a further refinement, the framed quadtree [68], which provides highest-resolution cells along the borders of all coarse cells to enable closer approximations to the shortest-distance path on a multi-resolution grid.
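The recursive subdivision underlying the quadtree can be sketched as follows; the occupancy-grid encoding, the cell labels, and the power-of-two region size are illustrative assumptions (the framed-quadtree border cells of Szczerba and Chen are not shown).

```python
def decompose(grid, r0, c0, size, min_size=1):
    """Recursively subdivide a square region of an occupancy grid.

    grid: 2D list of 0 (free) and 1 (obstacle), square, power-of-two side.
    Returns a list of leaf cells (row, col, size, label), where label is
    'free', 'obstacle', or 'mixed' (mixed only at the minimum cell size).
    """
    cells = [grid[r][c]
             for r in range(r0, r0 + size)
             for c in range(c0, c0 + size)]
    if all(v == 0 for v in cells):
        return [(r0, c0, size, 'free')]       # homogeneous: keep coarse
    if all(v == 1 for v in cells):
        return [(r0, c0, size, 'obstacle')]
    if size == min_size:
        return [(r0, c0, size, 'mixed')]      # cannot subdivide further
    h = size // 2
    leaves = []
    for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
        leaves += decompose(grid, r0 + dr, c0 + dc, h, min_size)
    return leaves
```

As the text describes, a large free quadrant survives as a single coarse leaf, while subdivision concentrates fine cells along free/obstacle borders.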

Used first by Chatila [8], exact cell decompositions divide a space into cells whose union is exactly the free space. The borders of the cells typically correspond to the borders of free space, or mark other properties of the obstacle shape. Because the cells match the free space exactly, the technique is complete provided the search over the cell connectivity graph is complete.

None of the cell decomposition approaches were conceived with natural terrain path planning in mind. Though one could imagine a specialized decomposition for natural terrain, approximate decomposition techniques, especially regular grids, are attractive for their simplicity. Perhaps more importantly, existing data for natural terrain is almost universally stored in raster-patterned maps that form a regular grid.

### 2.1.2 Roadmap Approaches

Voronoi diagrams are examples of topological mappings, called retractions, that map the free configuration space to a one-dimensional subset [48]. The retraction is a “roadmap” graph that can be searched for a feasible or optimal path.

Voronoi diagrams have the advantage of maintaining the greatest distance between a robot following the diagram and obstacles. Extensions to higher than two dimensions are available but are far more complicated to define. It is not clear how the Voronoi approach might be used in weighted spaces or in the context of natural terrain.

In two-dimensional spaces with polygonal obstacles, the visibility graph approach introduced by Nilsson [46] defines line segments between all obstacle vertex pairs that are “visible” to each other (i.e. whose inter-vertex line segments cross only free space). Adding the start and goal states as extra vertices, path planning reduces to searching the resulting graph for the sequence of edges that connects the start and goal. The graph is guaranteed to contain the minimum-distance path between the points. The approach is simple for two-dimensional planning around obstacles with distinct vertices, but is poorly defined in spaces with rounded obstacles, and may not lead to efficient paths in higher-dimensional spaces.
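A minimal version of the construction can be sketched as follows, under the simplifying assumption of convex polygonal obstacles (diagonals of a polygon are excluded explicitly, so obstacle interiors cannot be traversed); the function names and the Dijkstra search over the graph are illustrative choices.

```python
import heapq
import math

def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    return (v > 0) - (v < 0)

def properly_intersects(p, q, a, b):
    """True if segment pq strictly crosses segment ab (no endpoint touches)."""
    return (orient(p, q, a) != orient(p, q, b) and
            orient(a, b, p) != orient(a, b, q) and
            0 not in (orient(p, q, a), orient(p, q, b),
                      orient(a, b, p), orient(a, b, q)))

def visibility_shortest_path(start, goal, polygons):
    """Shortest path length around convex polygonal obstacles.

    polygons: list of convex polygons, each a counter-clockwise vertex list.
    Returns the minimum Euclidean path length, or None if unreachable.
    """
    edges = [(poly[i], poly[(i + 1) % len(poly)])
             for poly in polygons for i in range(len(poly))]
    owner = {}  # vertex -> (polygon index, vertex index, vertex count)
    for k, poly in enumerate(polygons):
        for i, v in enumerate(poly):
            owner[v] = (k, i, len(poly))
    nodes = [start, goal] + [v for poly in polygons for v in poly]

    def visible(u, v):
        if u in owner and v in owner and owner[u][0] == owner[v][0]:
            _, iu, n = owner[u]
            if (iu - owner[v][1]) % n not in (1, n - 1):
                return False  # non-adjacent vertices of the same polygon
        return not any(properly_intersects(u, v, a, b) for a, b in edges)

    # Dijkstra over the implicit visibility graph.
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v in nodes:
            if v != u and visible(u, v):
                nd = d + math.dist(u, v)
                if nd < dist.get(v, math.inf):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
    return None
```

For a single square obstacle between start and goal, the returned length is exactly that of the taut path around the nearer pair of corners, illustrating the guarantee that the graph contains the minimum-distance path.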

One might also envision a specialized roadmap for travel on natural terrain.

### 2.1.3 Potential Fields

Potential field algorithms [33] plan by superposing an attractive field centered at a global goal with repulsive fields surrounding obstacles. A “plan” simply follows the steepest resulting gradient to the goal. The approach is extremely simple to encode but, under the greedy steepest-descent strategy, is vulnerable to local minima, and hence is incomplete.
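Both the simplicity and the local-minimum failure mode are easy to exhibit. The following sketch uses a common textbook field shape (linear attraction, inverse repulsion inside an influence radius); the gains, radii, and step size are illustrative assumptions rather than values from any cited work.

```python
import math

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=1.0,
                   d0=2.0, step=0.1):
    """One steepest-descent step in an attractive/repulsive field.

    Attraction pulls linearly toward the goal; each point obstacle
    repels within influence radius d0. Returns the next position.
    """
    gx = k_att * (goal[0] - pos[0])
    gy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        d = math.hypot(pos[0] - ox, pos[1] - oy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            gx += mag * (pos[0] - ox) / d  # push away from obstacle
            gy += mag * (pos[1] - oy) / d
    norm = math.hypot(gx, gy)
    if norm < 1e-9:
        return pos  # zero gradient: local minimum (or the goal itself)
    return (pos[0] + step * gx / norm, pos[1] + step * gy / norm)

def follow(pos, goal, obstacles, max_steps=500, tol=0.05, step=0.1):
    """Greedy descent. Returns (final position, reached-goal flag)."""
    for _ in range(max_steps):
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < tol:
            return pos, True
        nxt = potential_step(pos, goal, obstacles, step=step)
        if nxt == pos:
            return pos, False  # stuck at a stationary point
        pos = nxt
    return pos, False
```

With no obstacles the descent reaches the goal; with an obstacle placed directly on the start-goal line, attraction and repulsion balance in front of the obstacle and the greedy follower never arrives, which is precisely the incompleteness noted above.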

## 2.2 Randomized Path Planning

Randomized search techniques were developed to address motion planning for robots with many degrees of freedom (e.g. snake robots), and hence with high-dimensional configuration spaces. Randomized algorithms trade the systematic, sometimes optimal exploration of deterministic search for a stochastic approach that enables rapid exploration of very large state spaces. While deterministic, systematic search is complete and sometimes optimal, randomized techniques have only probabilistic bounds on the time to reach a solution, and are generally not optimal. However, for high-dimensional problems where finding any feasible path is sufficient, randomized approaches are very effective.

### 2.2.1 Rapidly Exploring Random Trees

LaValle’s rapidly-exploring random trees (RRTs) are incrementally constructed search trees that attempt to rapidly and uniformly search the state space [36]. Their benefits include guaranteed convergence to a uniform coverage of any non-convex space. The basic technique seeds a graph with a node at the start state. A new point is randomly selected from a uniform distribution over the search space. In a greedy fashion, the node in the graph nearest to the random point is then expanded in the direction of the point to a non-collision state, becoming a new node in the graph. The process continues until the graph connects the start state to the goal state. Despite the greedy approach, the algorithm avoids local minima in high dimensions. Kuffner and LaValle present a follow-on algorithm, RRT-Connect [35], which grows an RRT from both the start and the goal and attempts to greedily connect the two trees. Though non-optimal and incomplete, the algorithms are probabilistically complete and are efficient in practice for spaces with high dimensions (e.g. 10 or more).
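The sample-extend loop described above can be sketched in two dimensions as follows; the step size, the small goal bias, and the point-only collision check (a real planner would also validate the connecting edge) are illustrative simplifications.

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, goal_tol=0.5,
        max_iters=5000, seed=0):
    """Minimal RRT in a 2-D workspace.

    is_free(p): True if point p is collision-free.
    bounds: ((xmin, xmax), (ymin, ymax)) sampling region.
    Returns (tree, leaf) on success, where tree maps child -> parent
    and leaf is the node within goal_tol of the goal; None on failure.
    """
    rng = random.Random(seed)
    tree = {start: None}
    for _ in range(max_iters):
        # Sample uniformly over the space; occasionally bias to the goal.
        if rng.random() < 0.05:
            sample = goal
        else:
            sample = (rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        # Greedily extend the nearest existing node toward the sample.
        near = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d < 1e-9:
            continue
        t = min(1.0, step / d)
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        if is_free(new):
            tree[new] = near
            if math.dist(new, goal) < goal_tol:
                return tree, new
    return None
```

A path is recovered by walking parent pointers from the returned leaf back to the start node.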

## 2.3 Temporal Path Planning

Temporal path planning seeks time-optimal solutions, typically in a dynamic environment. The bulk of research in this area has been directed towards robotic manipulation and simple vehicle control, including dynamics and moving obstacles.

Bobrow et al. [5] approach the time-optimal path problem for a manipulator by dividing the problem into two steps. First, the path of the manipulator end effector is planned, using kinematic constraints, to avoid obstacles in the environment. Second, an algorithm solves for the optimal velocity profile, consisting of alternating segments of maximum acceleration and deceleration (bang-bang control). The algorithm considers kinematics, full manipulator dynamics, and joint torque limits that are arbitrary functions of the manipulator state. The dynamics and constraints result in a maximum velocity profile over the path. The optimal velocity profile follows the maximum velocity as closely as possible using the bang-bang approach. Though the approach provides a convenient way to decompose the optimal control problem, the assumed initial path may have a strong influence on the resulting maximum allowable velocity profile, and hence on the optimal solution. A global consideration of path and velocity might lead to significant improvements in performance. The Shiller and Gwo paper mentioned earlier [58] presents the analog of [5] for a robotic vehicle on natural terrain. However, unlike in the manipulation paper, the authors initially solve for the top several minimum-distance paths, then determine the optimal velocity profiles for each. This acknowledges that the initial path selection is not straightforward and that the minimum-distance path is not necessarily the optimal-time path.

Fraichard [17] introduces the concept of state-time space to solve temporal path planning for a 2-DOF non-holonomic vehicle. As in the work by Bobrow et al. [5], a path is first selected in free space that avoids all static obstacles. Parameterizing the path by arc length, the author defines a state-time space comprising dimensions in the parameter, its first derivative, and time. Dynamic obstacles can be represented as volumes in this new space. A second step discretizes the space and evaluates canonical trajectories in terms of dynamic and collision constraints to derive approximately time-optimal trajectories. The obvious shortfall is that the path is pre-selected with no consideration of vehicle or obstacle dynamics. However, this thesis draws upon the notion of a state-time or configuration-time space.

Fiorini and Shiller [16] define the concept of velocity obstacles to generate paths that avoid obstacles using the velocity space. The approach avoids path integration to predict robot position, but rather defines regions of the velocity space that predict imminent collisions. By superimposing regions of velocity that are accessible based on dynamic constraints, the technique is able to generate velocity profiles that avoid the obstacles with physically achievable maneuvers. Though not guaranteed to be optimal, this approach cleverly removes some of the complication of representations that consider the state-time space. It performs well for obstacle avoidance, but is not suited to the problem of planning through dynamically-varying cost regions.
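The core velocity-obstacle test can be stated compactly: a candidate velocity leads to collision exactly when the relative velocity points into the cone from the robot toward a disc of the combined robot-obstacle radius. The sketch below implements that cone test for circular agents; the function name and parameterization are illustrative, not taken from Fiorini and Shiller's formulation.

```python
import math

def in_velocity_obstacle(p_r, v_r, p_o, v_o, radius):
    """True if holding velocity v_r leads to collision with the obstacle.

    p_r, v_r: robot position and candidate velocity.
    p_o, v_o: obstacle position and velocity.
    radius: combined robot + obstacle radius.
    Tests whether the relative velocity v_r - v_o lies inside the
    collision cone toward the disc centered at the obstacle.
    """
    rx, ry = p_o[0] - p_r[0], p_o[1] - p_r[1]  # robot-to-obstacle vector
    vx, vy = v_r[0] - v_o[0], v_r[1] - v_o[1]  # relative velocity
    dist = math.hypot(rx, ry)
    if dist <= radius:
        return True                            # already in collision
    speed = math.hypot(vx, vy)
    if speed < 1e-12:
        return False                           # not closing at all
    # Collision iff moving toward the obstacle and the angle between the
    # relative velocity and the center line is below the cone half-angle.
    cos_a = (rx * vx + ry * vy) / (dist * speed)
    if cos_a <= 0:
        return False
    half_angle = math.asin(radius / dist)
    return math.acos(max(-1.0, min(1.0, cos_a))) < half_angle
```

A planner then simply discards candidate velocities for which this predicate is true for any obstacle, selecting among the remaining reachable velocities.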

By operating at a level at which rover dynamics can be ignored, the research in this thesis avoids some of the complexity addressed in the above examples. However, time-optimality, and certainly time-dependence, will remain important in many applications, and must be balanced against competing factors, particularly finite resources or other performance constraints and mission objectives. All the above approaches are hierarchical: the path is selected based only on avoiding static obstacles, and the velocity profile is then selected to avoid the dynamic ones. This is a substantial shortcoming. In the path-based resource management problem, path and timing must be considered simultaneously.

## 2.4 Resource Path Planning

In the realm of analyzing rover resource collection and usage, Shillcut [57] combines terrain, rover and ephemeris models to determine the evolution of solar array sun exposure and locomotion power for various coverage search patterns. The research rates the coverage patterns by comparing the driving distance each requires to the solar energy collection each enables, as a function of time and rover orientation. Though the work stops short of planning paths based on this analysis, it provides motivation for integrated path planning and resource management.

Several other authors have considered planning to minimize the energy expense (work) over a terrain path [52][54][67]. These are described in more depth in Section 2.7.2.

## 2.5 Path Planning in Unknown Environments

Nourbakhsh and Genesereth propose an assumptive planning architecture to create limited conditional plans [47]. A complete conditional plan must consider all possible initial conditions, percepts and actions in a search for an optimal plan. As an alternative, the authors suggest a principled way to make simplifying assumptions about possible initial conditions, the effects of actions and the state of the robot given percepts, and yet to preserve plan correctness and goal-reachability in the face of wrong assumptions. In short, each function must return a proper subset of the actual states or action effects. Along with functions that make assumptions about initial state, actions and percepts, the architecture requires a function that can detect irreversible chains of actions (to prevent making an irreversible incorrect decision), and another function that detects false goals (to identify cases where actions must continue despite the sensors’ indication of goal completion). The authors present an algorithm that interleaves planning and execution so that at each step, the robot executes the first action, senses its environment and re-plans the remainder. This reduces the requirements on the irreversible chain detector to look only one step ahead. The strategy was used in several successful robot control architectures in indoor environments, allowing the robots to act optimally in navigation tasks.

Stentz presents the D* algorithm, which is designed for global path planning in partially-known environments. It is a heuristic search algorithm, like A*, that visits the minimum number of states in finding an optimal solution, but generalizes to enable optimally efficient path repair in response to changing cost information [63][64][65]. Where A* is forced to plan from scratch if any state transition cost changes, D* determines which nodes in the graph are affected by the changes, and isolates the repairs to those nodes. The effect is a dramatic improvement, as many as two orders of magnitude, in speed over initial planning for cost map changes local to the robot. The algorithm has been used successfully in a number of real-world applications. An algorithm presented by Koenig and Likhachev [34], D* Lite, has more recently achieved even better performance than D* under a simpler design that more closely mimics A*.