Perceiving intelligent action: experiments in the interpretation of intentional motion
A remarkable characteristic of human perceptual systems is the ability to recognize the goals and intentions of other living things – “intelligent agents” – on the basis of their actions or patterns of motion. We use this ability to anticipate the behavior of agents in the environment and to better inform our own decision making. The aim of this project is to develop a theoretical model of the perception of intentions, shedding light on both the function of the human (biological) perceptual system and the design of computational models that drive artificial systems (robots). To this end, an interdisciplinary group of IGERT students created a novel virtual environment populated by intelligent autonomous agents, and endowed these agents with human-like capacities: goals, real-time perception, memory, planning, and decision making. One phase of the project focused on the perceptual judgments of human observers watching the agents. Experimental subjects’ judgments of the agents’ intentions were accurate and consistent with one another. In another experiment, the agents were programmed to evolve through many “generations” as they competed for “food” and survival within a game-like framework. We examined whether the ability of human observers to classify and interpret the intentions of agents improved as the behavior of successive agent generations became increasingly optimized. Our results show that (a) dynamic virtual environments can be developed that embody the essential perceptual cues to the intentions of intelligent agents, and (b) when studied in such environments, human perception of the intentions of intelligent agents proves to be accurate, rational, and rule-governed.