LMU ☀️ CMSI 3300
ARTIFICIAL INTELLIGENCE
Practice
  1. Why is AI not concerned with clairvoyant agents?
  2. Write PEAS descriptions for
    1. A music composer
    2. An aircraft autolander
    3. An essay evaluator
    4. A robotic sentry gun for the Keck Lab
    5. A shopping bot for an offline bookstore
  3. Categorize each of the following along each of the six dimensions from Russell and Norvig (fully/partially observable, deterministic/stochastic, episodic/sequential, static/dynamic, discrete/continuous, single-agent/multi-agent):
    1. A music composer
    2. An aircraft autolander
    3. An essay evaluator
    4. A robotic sentry gun for the Keck Lab
    5. A shopping bot for an offline bookstore
  4. Describe a task environment in which a pure reflex agent can be perfectly rational.
  5. If the pure reflex vacuum cleaner agent from the text had a performance measure in which two points were given for each square cleaned and one point was subtracted for each movement from one square to the other, and squares never became dirty once cleaned, describe an agent function that would make the agent rational.
  6. Suppose the vacuum cleaner world was extended to have four squares in a walled-in 2 x 2 grid, with two new actions, up and down. Suppose also that the performance measure is +5 for cleaning a dirty square, -4 for sucking in a clean square, and -2 for making a move. A clean square becomes dirty again when the vacuum enters it for the nth time, where n mod 4 = 0 (that is, on every fourth entry). The vacuum can sense only whether the square it currently occupies is dirty or clean; it cannot sense its location. It can attempt a move action and will receive a bump percept if the move runs into an outer wall. Bumping into a wall leaves the agent in the same square and does not increase the number of times that square is considered entered.
    1. Ignoring belief states, how many states are there in a problem formulation of this world?
    2. Give an agent program for this world that would make this agent rational. Use pseudocode, but be fairly precise. Be sure to describe the model the agent constructs. It may be helpful to sketch (a portion of) the belief state graph.
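The environment in question 6 has several interacting rules (the entry-count dirt rule, bump percepts, and the scoring), so it can help to see them in one place. Below is a minimal simulator sketch, assuming one reasonable reading of those rules; the class name, the coordinate scheme, the choice to charge -2 even for a bumped move, and the convention that the starting square counts as its first entry are illustrative assumptions, not part of the problem statement.

```python
# A minimal simulator for the 2 x 2 vacuum world in question 6 (illustrative only).
# Assumed reading of the dirt rule: a square becomes dirty again on every 4th entry.

class VacuumWorld2x2:
    MOVES = {"left": (0, -1), "right": (0, 1), "up": (-1, 0), "down": (1, 0)}

    def __init__(self):
        self.pos = (0, 0)                          # agent starts in the top-left square
        self.dirty = {(r, c): True for r in range(2) for c in range(2)}
        self.entries = {(r, c): 0 for r in range(2) for c in range(2)}
        self.entries[self.pos] = 1                 # assumed: starting square counts as entered once
        self.score = 0

    def percept(self):
        # The agent senses only the dirt status of its current square.
        return "dirty" if self.dirty[self.pos] else "clean"

    def step(self, action):
        """Apply one action; return (dirt percept, bump?) and update the score."""
        bump = False
        if action == "suck":
            if self.dirty[self.pos]:
                self.score += 5                    # +5 for cleaning a dirty square
                self.dirty[self.pos] = False
            else:
                self.score -= 4                    # -4 for sucking in a clean square
        elif action in self.MOVES:
            self.score -= 2                        # -2 per move attempt (assumption: bumps still cost)
            dr, dc = self.MOVES[action]
            r, c = self.pos[0] + dr, self.pos[1] + dc
            if 0 <= r < 2 and 0 <= c < 2:
                self.pos = (r, c)
                self.entries[self.pos] += 1
                if self.entries[self.pos] % 4 == 0:    # every 4th entry dirties the square
                    self.dirty[self.pos] = True
            else:
                bump = True                        # agent stays put; bump does not count as an entry
        return self.percept(), bump
```

Running a candidate agent program in a loop of `percept()` and `step()` calls is a quick way to sanity-check the policy proposed in part 2.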
  7. Give an example of a road navigation problem for which uniform cost search fails to return an optimal solution if the algorithm does the goal test when a node is generated rather than when it is expanded. (A search skeleton illustrating both goal-test placements appears after question 8.)
  8. What is the fewest possible number of nodes, in terms of b, d, and m (branching factor, depth of the shallowest goal, and maximum depth of the search tree, respectively), that will be generated by each of the following? Assume that all nodes have exactly b children. If the strategy can have both goal-test-at-generation-time and goal-test-at-expansion-time variants, answer for both.
    1. depth-first search with backtracking
    2. depth-first expansion
    3. breadth-first
    4. uniform-cost
    5. depth-first iterative deepening
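Questions 7 and 8 both turn on where the goal test happens. The sketch below is a generic uniform-cost search with an illustrative `test_at_generation` flag and a generated-node counter; the graph format, the function name, and the counter are assumptions made for this example, not something the problems require.

```python
# Uniform-cost search showing both goal-test placements from questions 7 and 8.
# The graph format, the test_at_generation flag, and the counter are illustrative only.
import heapq
import itertools

def uniform_cost(graph, start, is_goal, test_at_generation=False):
    """graph: dict mapping a state to a list of (successor, step_cost) pairs."""
    tie = itertools.count()                      # tie-breaker so the heap never compares states
    generated = 1                                # the start node counts as generated
    frontier = [(0, next(tie), start, [start])]
    best_cost = {start: 0}
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)
        if not test_at_generation and is_goal(state):
            return cost, path, generated         # test at expansion: first goal popped is cheapest
        for succ, step in graph.get(state, []):
            generated += 1
            new_cost = cost + step
            if test_at_generation and is_goal(succ):
                # Returning here may miss a cheaper path that is still on the frontier.
                return new_cost, path + [succ], generated
            if succ not in best_cost or new_cost < best_cost[succ]:
                best_cost[succ] = new_cost
                heapq.heappush(frontier, (new_cost, next(tie), succ, path + [succ]))
    return None
```

Building a tiny road graph by hand and comparing the two flag settings exhibits the kind of failure question 7 asks for, and the returned count gives an empirical check on answers to question 8.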
  9. Recall that for the A* algorithm, the evaluation function f for a node n is f(n) = g(n) + h(n), where g(n) is the cost of the path from the start node to n and h(n) is an admissible (non-overestimating) estimate of the cost to reach a goal from n. What interesting things can you say about the behavior of the algorithm when
    1. h is a perfect estimate
    2. h(n) is zero for every n
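The two cases in question 9 are easier to reason about with a concrete A* at hand. The sketch below is a standard graph-search A* with the heuristic passed in as a parameter; the names are illustrative and non-negative step costs are assumed.

```python
# A* graph search with a pluggable heuristic h, for exploring question 9.
# All names are illustrative; step costs are assumed non-negative.
import heapq
import itertools

def a_star(graph, start, is_goal, h):
    """graph: dict mapping a state to a list of (successor, step_cost) pairs.
    h: function from a state to an estimate of the cheapest cost to a goal."""
    tie = itertools.count()
    frontier = [(h(start), next(tie), 0, start, [start])]   # entries are (f, tie, g, state, path)
    best_g = {start: 0}
    expanded = 0
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path, expanded
        expanded += 1
        for succ, step in graph.get(state, []):
            new_g = g + step
            if succ not in best_g or new_g < best_g[succ]:
                best_g[succ] = new_g
                heapq.heappush(frontier,
                               (new_g + h(succ), next(tie), new_g, succ, path + [succ]))
    return None
```

Running it on the same small graph once with `h = lambda s: 0` and once with a perfect estimate, while watching the `expanded` count, makes the difference in behavior easy to see.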
  10. Prove that A* is optimally efficient for any given heuristic function.
  11. How many reachable states are there for the 24-puzzle?
  12. Is it reasonable to try to define the 24-puzzle problem as a constraint satisfaction problem? Why or why not?
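For questions 11 and 12, pinning down a concrete representation of a 24-puzzle state makes both the counting argument and the CSP question easier to think about. The sketch below uses one common choice, a flat 25-tuple with 0 marking the blank; the names and the move convention are illustrative assumptions.

```python
# One possible state representation and successor function for the 24-puzzle
# (the 5 x 5 sliding-tile puzzle), as context for questions 11 and 12.

N = 5
GOAL = tuple(range(1, N * N)) + (0,)     # tiles 1..24 in order, blank (0) last

def successors(state):
    """Yield (move, next_state) pairs, where move names the direction the blank slides."""
    blank = state.index(0)
    row, col = divmod(blank, N)
    deltas = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    for move, (dr, dc) in deltas.items():
        r, c = row + dr, col + dc
        if 0 <= r < N and 0 <= c < N:
            swap = r * N + c
            next_state = list(state)
            next_state[blank], next_state[swap] = next_state[swap], next_state[blank]
            yield move, tuple(next_state)
```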