The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213. May 13, 2005. Submitted in partial fulfillment of the requirements for the ...

2.9 Summary

Mission-directed path planning seeks to solve for paths in a context historically addressed in the AI planning and scheduling literature. Path planning is most naturally solved in a state space formulation; the planning and scheduling community has almost universally used spaces of possible plans. Planning and scheduling software deals very effectively with temporal and resource constraint problems, but has not addressed problems where spatial dimensions are so critical, as in path planning. Specifically, planning and scheduling has been most successful at satisfying resource constraints, but has not typically sought to optimize resource usage. Optimization of state parameters is the domain of the state space. Meanwhile, the path planning community has addressed temporal planning in a hierarchical fashion that prevents globally-optimal solutions, and has limited its treatment of the resource management problem to resources that are consumed monotonically over a path.

As demonstrated by ASPEN and CASPER, AI planning and scheduling software can call a path planner as a subroutine to handle mobile phases of a larger activity plan. In scenarios where traverses are relatively short-distance and duration, this type of hierarchy might suffice. But in situations where traverse activities consume a large share of the time and resources in a mission plan, the system cannot tolerate a path planner that is naive to time, resources and constraints. Addressing the combined problem of path planning, temporal planning and resource planning is a central contribution of this thesis.

This thesis elects to follow the path planning paradigm. The following is a brief list of problems in mission-directed path planning that are largely absent in the robot path planning literature but have been addressed in this thesis:

• Efficient, optimal planning and re-planning, under global constraints, in a state space of greater than two dimensions.

• Simultaneous, global consideration of highly-coupled route, time and resource variables.

• Incorporation of time-varying anisotropic costs and constraints.

• Capability to reason about, constrain and optimize non-monotonic (expendable and replenishable) resource variables.


• Representation of actions, either coupled with navigation actions or distinct, and both mobile and stationary, that enable resource management or achieve non-navigational mission goals.

The Incremental Search Engine by Stentz and Tompkins embodies several attributes that are highly desirable or even essential to solve the mission-directed path planning problem. ISE offers the optimality guarantees and re-planning efficiency of D*, which has proven itself in many robot applications. It is also more space and time efficient in addressing higher-dimensional state spaces than other existing algorithms. This is critical if we expect to consider time and resources. Furthermore, ISE enforces global constraints on paths, for example bounds on battery energy.

The next chapter describes ISE in far greater detail as background for presenting a practical solution to mission-directed path planning.


3. Incremental Search

This chapter describes one of the foundations for this work - an algorithm developed by Stentz and Tompkins, called Incremental Search Engine (ISE), that satisfies many of the demands of mission-directed path planning. ISE provides several enhancements over previous algorithms that are critical in mission-directed path planning. ISE is a generalization of the D* algorithm (see Section 2.5 or [63][65]) that retains the planning and re-planning efficiency of Focussed D* [63], but enforces global constraints and operates efficiently in state spaces with greater than two dimensions. Mission-directed path planning is the first and most exhaustive application of ISE.

ISE is a backwards-chaining algorithm that starts its search from one or more goal states, and finds an optimal path that “ends” at the specified start state. Like A* and D*, ISE maintains a list of states to “expand” to propagate the search to new states. These states, beginning with the goals, are prioritized for expansion according to an objective function, which accumulates the cost from each state to the goal, and a heuristic function, which estimates the cost from the start state to each of the states. State expansion generates all possible backward “arcs”, representing actions in the planning domain, from the final state back to the initial states from which those actions originate. The new states are placed on the list and prioritized according to their own values. The optimal path emerges upon expanding the start state.
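As a concrete illustration, this prioritized backward expansion can be sketched in a few lines of Python. This is a simplified sketch, not ISE itself: `backward_arcs` and `h` are hypothetical stand-ins for the domain's arc generator and heuristic, and the re-planning machinery is omitted entirely.

```python
import heapq

def backward_search(goals, start, backward_arcs, h):
    """Backward best-first search: expand from the goals toward the start.

    g[s] holds the best known cost from s to a goal; states are prioritized
    by g[s] + h(start, s), mirroring A* run in reverse.
    """
    g = {s: 0.0 for s in goals}              # cost-to-goal, seeded at the goals
    succ = {}                                # best successor along the path to a goal
    open_list = [(h(start, s), s) for s in goals]
    heapq.heapify(open_list)
    while open_list:
        _, s = heapq.heappop(open_list)
        if s == start:                       # the optimal path emerges at the start state
            path = [s]
            while path[-1] in succ:
                path.append(succ[path[-1]])
            return path, g[start]
        for prev, cost in backward_arcs(s):  # backward arcs into s
            new_g = g[s] + cost
            if new_g < g.get(prev, float("inf")):
                g[prev] = new_g
                succ[prev] = s
                heapq.heappush(open_list, (new_g + h(start, prev), prev))
    return None, float("inf")
```

On a toy line graph 0 → 1 → 2 → 3 with unit arc costs, searching backward from goal 3 to start 0 recovers the path [0, 1, 2, 3] at cost 3.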

Further machinery within the prioritized state list enables ISE to efficiently repair path solutions when the transition arcs change during a path execution.

The following sections describe the salient features of the algorithm. Section 3.1 describes how ISE encodes application domains. Section 3.2 explains several efficiency mechanisms which speed search and minimize state proliferation. Section 3.3 provides an overview of the search process (Appendix 1 describes the ISE algorithm in detail).

Throughout these sections, it should become clear that ISE is a general incremental search algorithm that can be applied to a wide range of problems. A user must define state space variables and application-specific functions of state that compute state transitions, heuristics, state priorities, and other quantities. Section 3.4 experimentally demonstrates how ISE performance varies as these parameters change.

** INCREMENTAL SEARCH**

3.1 States, Transitions and Cost

A*, D* and other incremental graph search algorithms are best suited to two-dimensional grid planning or to domains where states are abstract nodes in a graph. In factored spaces of greater than two dimensions, existing algorithms fail to provide a simple means of encoding state spaces. ISE allows a natural representation of state space in terms of discrete independent and dependent parameters.

3.1.1 Independent and Dependent State Parameters

ISE allows two types of state variables - independent parameters (IPARMS) and dependent parameters (DPARMS).

IPARMS are the foundation of the ISE state space and search and are essential to defining a problem domain.

Changes in IPARMS are independent of other state parameters. A domain may optionally include DPARMS, parameters whose values change as a function of changes in IPARMS or other DPARMS. DPARMS for a state are stored in a set corresponding to the state’s IPARMS. An ISE state combines the IPARMS and DPARMS of the domain.

For example, a robot control problem might entail planning a path through a two-dimensional office area, where obstacles are time-varying. The state space might define two independent spatial parameters X and Y that are positions in a regular grid, and a dependent temporal dimension T, such that ∆T = f ( ∆X, ∆Y ). The ISE state is an ( x, y, t ) tuple. ISE would represent this domain by encoding a regular grid of sets, each corresponding to an x-y cell, that store values of t. Each t value would represent a different state ( x, y, t ).
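The grid-of-sets encoding might be sketched as follows. This is a toy illustration only; the names `state_space`, `add_state` and `states_at` are hypothetical, not part of ISE.

```python
from collections import defaultdict

# A regular grid of sets keyed by the independent parameters (x, y);
# each set stores values of the dependent parameter t.  Every stored t
# corresponds to a distinct ISE state (x, y, t).
state_space = defaultdict(set)

def add_state(x, y, t):
    state_space[(x, y)].add(t)

def states_at(x, y):
    """All ISE states sharing the IPARMS combination (x, y)."""
    return [(x, y, t) for t in sorted(state_space[(x, y)])]

add_state(2, 3, 10.0)
add_state(2, 3, 12.5)   # same cell, different arrival time -> a different state
add_state(4, 1, 7.0)
```

Note that the dependent parameter adds no dimension to the underlying grid: the same x-y cell simply stores however many t values the search reaches.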

3.1.2 State Transitions

Arcs are transitions between two ISE states. ISE uses two user-defined arc transition functions, one backwards ( β ) and one forwards ( Φ ), to model state transitions in a domain. Given a state space S and action space A, the arc transition functions define deterministic mappings S × A → S that obey the Markov assumption; that is, the resulting state is a function only of the action and the originating state. Given a state and a choice of action, the arc transition functions define the change in IPARMS and the corresponding change in DPARMS. The backward function β takes a final state and produces the initial state from which the action was executed. In many cases, the forward function Φ is simply the inverse of β.

The forward transition function Φ is not always single-valued. ISE state variables can be constrained to minimum or maximum values. For example, a state variable b might represent battery charge, which can never rise above the battery capacity b_max. Function β could legally take the form b_initial = min ( b_final − i∆t, b_max ), where i and ∆t are the charging current during the action and the action duration, respectively. Now suppose there is an action A where i is negative (signifying a battery discharge in the forward-time direction). If b_final ≥ b_max + i∆t, then b_initial = b_max.

Clearly, then, the forward transition function Φ for action A cannot unambiguously predict b_final from b_initial = b_max. Instead, Φ must predict the range of legal states that are possible from an initial state. Using the above example, Φ could take the form b_final = b_initial + i∆t for b_initial < b_max, and b_final ∈ [ b_initial + i∆t, b_max ] for b_initial = b_max.
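The clamped battery example can be written out concretely. This is an illustrative sketch under the conventions above, not ISE's actual interface: i < 0 denotes discharge in forward time, and the names `B_MAX`, `beta` and `phi` are hypothetical.

```python
B_MAX = 100.0  # battery capacity b_max (hypothetical units)

def beta(b_final, i, dt):
    """Backward transition: the initial charge that leads to b_final,
    clamped so that charge never exceeds capacity."""
    return min(b_final - i * dt, B_MAX)

def phi(b_initial, i, dt):
    """Forward transition, returned as a (low, high) range.  It is
    set-valued when b_initial == B_MAX, because every b_final in
    [B_MAX + i*dt, B_MAX] maps back to b_initial = B_MAX."""
    if b_initial < B_MAX:
        v = b_initial + i * dt
        return (v, v)                       # single-valued case
    return (b_initial + i * dt, B_MAX)      # interval of legal final charges

# Discharge action: i = -2.0 over dt = 5.0, so i*dt = -10
assert beta(80.0, -2.0, 5.0) == 90.0        # unclamped: 80 + 10
assert beta(95.0, -2.0, 5.0) == B_MAX       # clamped: 105 -> 100
assert phi(B_MAX, -2.0, 5.0) == (90.0, 100.0)   # ambiguous forward range
```

The clamp in `beta` is exactly what destroys invertibility: two distinct final charges (95 and 100 here) both map back to a full battery.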

Arc parameters, or APARMS, govern the effects of transitions. Each IPARMS combination has APARMS that help define the transitions into or out of the states with those IPARMS. Most importantly, APARMS are parameters that are allowed to change during plan execution. As discussed in upcoming sections, re-planning enables ISE to efficiently repair a plan whose underlying APARMS have changed.

3.1.3 Local Constraints

Local constraints are a generalization of obstacles in the path planning literature. The satisfaction or violation of local constraints depends solely on the specific state and the action executed to or from that state; it does not depend on state or action history. Obstacles in traditional path planning are local constraints on locomotion actions. Local constraints extend obstacles by limiting arbitrary actions over any range of states in the state space. Through the application-specific (user-defined) arc transition functions β and Φ, ISE permits or rejects arcs into states that violate local constraints. Rejected states cannot become part of any ISE path solution.

In the running example, the office robot would be constrained to avoid people and closed doors, as defined by local constraints at specific ( x, y, t ) regions of the state space. The office environment might further restrict its operation to certain hours of the day, a purely temporal local constraint.
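These office-robot constraints might be checked as follows. Everything here is a hypothetical sketch: the door region, the operating hours, and the toy backward model are all invented for illustration, and a real β function would be domain-specific.

```python
# Door at cell (5, 2) is closed during t in [0, 50]; the robot may only
# operate between t = 8.0 and t = 18.0 (a purely temporal constraint).
CLOSED_DOORS = {((5, 2), (0.0, 50.0))}
OPEN_HOURS = (8.0, 18.0)

def violates_local_constraints(x, y, t):
    """True if the state (x, y, t) violates any local constraint.
    Only the state itself is consulted -- never state or action history."""
    if not (OPEN_HOURS[0] <= t <= OPEN_HOURS[1]):
        return True
    for (cx, cy), (t0, t1) in CLOSED_DOORS:
        if (x, y) == (cx, cy) and t0 <= t <= t1:
            return True
    return False

def apply_backward(state, action):
    """Toy backward model: undo a unit move (dx, dy) taking dt time."""
    x, y, t = state
    dx, dy, dt = action
    return (x - dx, y - dy, t - dt)

def beta_with_constraints(final_state, action):
    """Backward transition that rejects arcs into illegal states."""
    initial_state = apply_backward(final_state, action)
    x, y, t = initial_state
    return None if violates_local_constraints(x, y, t) else initial_state
```

Returning `None` models arc rejection: a rejected initial state is never placed on the open list, so it cannot enter any path solution.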

3.1.4 Global Constraints

In contrast to local constraints, which operate on state parameters, global constraints operate on parameters that integrate over the history of states or actions. Examples include path distance, duration, and capacity limitations on consumable or replenishable resources. ISE enforces global constraints in one of two ways: it stores global constraint parameters as auxiliary variables alongside the DPARMS state parameters or, if the parameter is to be optimized, in the objective function (see Sections 3.1.6 and 3.1.7). If stored alongside the DPARMS parameters, the global constraint parameters are evaluated inside the β state transition function. When a violation occurs, the β function rejects the arc. Alternatively, if the parameter is stored in the objective function, ISE cannot reject the arc, but assigns a very high cost to the arc to reduce its chances of entering any optimal solution. Under this second approach, if no cheaper legal path exists, ISE will yield the least expensive illegal path.
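The two enforcement mechanisms can be contrasted in a short sketch. The names and limits are hypothetical; a duration bound stands in for any global constraint parameter.

```python
DURATION_LIMIT = 100.0   # hypothetical global constraint: total path duration
PENALTY = 1.0e9          # large cost used when the constraint lives in the objective

def beta_reject(elapsed, arc_duration):
    """Approach 1: track elapsed duration alongside the DPARMS and reject
    the arc inside beta when the global limit would be exceeded."""
    new_elapsed = elapsed + arc_duration
    if new_elapsed > DURATION_LIMIT:
        return None                      # arc rejected outright
    return new_elapsed

def arc_cost_penalize(base_cost, elapsed, arc_duration):
    """Approach 2: keep the arc legal but fold the violation into the
    objective as a very high cost, so a violating arc enters a solution
    only when no cheaper legal path exists."""
    if elapsed + arc_duration > DURATION_LIMIT:
        return base_cost + PENALTY
    return base_cost
```

The first approach can prove infeasibility (the search exhausts without reaching the start); the second degrades gracefully by returning the least expensive illegal path, as noted above.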


What are the implications of local and global constraint parameters? The addition of global constraint parameters does not add dimensions to the state space. However, under the Markov assumption, aside from rejecting arcs, global constraint parameters cannot legally have any effect on DPARMS or cost for state transitions. Furthermore, ISE can only guarantee planning completeness in the variables comprising the state space. If a parameter is not a member of the state, ISE cannot explore trajectories that vary the parameter to find feasible path solutions. Any global constraint can be transformed into a local constraint by promoting its parameter to a full DPARMS state parameter. The parameter can then affect state transitions and costs for legal state transitions, but at the computational and memory expense of an added dimension in the state space (see Section 3.4.2).

3.1.5 Resource Parameters

Because of its flexible treatment of state, constraints and cost, ISE is particularly well-suited to reasoning about resources. A resource is a quantity that is essential to achieving a goal, and yet is in limited supply. ISE represents resources as constraint-limited parameters: either as DPARMS state variables, limited by local constraints, or as global constraint parameters alongside DPARMS or in the objective function.

Resource parameters can be divided into two categories - monotonic and non-monotonic. Monotonic resource parameters monotonically increase or decrease over the course of a trajectory. Examples of monotonic resources include non-rechargeable battery energy or finite-lifetime components. More abstractly, if a path is constrained to be less than a given distance or duration, then remaining distance or duration can also be considered resources. Planning problems often involve renewable resources - parameters that can be expended and replenished over different arcs, and hence are non-monotonic. Rechargeable energy, thermal load, available computer memory or communications bandwidth are all examples of non-monotonic resources.

In ISE, because of the backwards-chaining order of search, the semantics of resource parameters are different from those of typical parameters. As with all DPARMS and objective function parameters, values for resource parameters must be initialized for each goal prior to search. Semantically, it is useful to treat these values as minimum allowable resource levels at the completion of a path. For example, a parameter representing the duration remaining to the goal (as a way of constraining duration over a path) might be set to zero. In other words, if a robot is given an upper bound on the duration to reach the goal, it is legal in the worst case to exhaust the entire duration “resource”, but no more.
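This backward resource semantics can be illustrated with a small sketch (hypothetical names; a remaining-duration parameter stands in for any path resource). The goal is seeded at zero, and the value grows as the search chains backward through arcs.

```python
DURATION_BOUND = 60.0    # upper bound on duration from start to goal (hypothetical)

def beta_duration(remaining_at_final, arc_duration):
    """Backward transition for the remaining-duration resource.  The goal
    state is initialized at 0 -- the minimum allowable level at path
    completion.  Chaining backward, each arc adds its duration, and the
    arc is rejected once the start would need more than the bound."""
    remaining_at_initial = remaining_at_final + arc_duration
    if remaining_at_initial > DURATION_BOUND:
        return None
    return remaining_at_initial

# Chaining backward from the goal through arcs of 20, 25 and 20 time units:
r = beta_duration(0.0, 20.0)   # 20.0 remaining at the second-to-last state
r = beta_duration(r, 25.0)     # 45.0
r = beta_duration(r, 20.0)     # None: 65 > 60, so this prefix is rejected
```

Rejection here means only that this particular backward chain cannot be extended; the search is free to explore other arcs whose accumulated duration stays within the bound.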