Research Abstract
--Abstract for the final research paper and poster--


The creation of autonomous agents is one of the central goals of AI. A key characteristic of such an autonomous agent is the ability to represent its environment internally and to extract rules and meanings from that internal representation. Agents meeting the first criterion have been modeled successfully, but meeting the second has proven difficult. One approach that has met with relative success is an algorithm known as Sensory Invariance Driven Action (SIDA). SIDA proposes that an agent can extract visual meaning from an internal representation of its environment by using sensory invariance as the criterion for directing its gaze trajectory. Our work integrates SIDA with a camera to demonstrate the algorithm's ability to provide agents with a method for real-time autonomous learning. We will run experiments that test the agent's ability to interact with and extract meaning from simple synthetic environments, then progress to experiments that test its interactions with natural environments in real time. We expect to show that SIDA is a highly plausible solution to the problem of autonomously grounding agents, and one that can be implemented with relative ease while achieving high learning rates.
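
To make the gaze-selection idea concrete, the Python sketch below steps a simulated gaze across a synthetic image of an oriented bar, at each step greedily choosing the move that best preserves the dominant filter response. This is a toy illustration only: the patch size, the crude oriented filters, the single-bar scene, and the greedy move rule are all assumptions made for the demo, not the published SIDA implementation.

"""Minimal sketch of a Sensory Invariance Driven Action (SIDA) style loop.

Illustrative toy only: the scene, filter bank, and invariance criterion
below are assumptions for the demo, not the original implementation.
"""
import numpy as np

PATCH = 9    # side length of the square gaze patch (assumed)
STEPS = 20   # number of gaze movements to simulate


def oriented_filter(theta, size=PATCH):
    """A crude oriented-bar filter (stand-in for a Gabor-like receptive field)."""
    ys, xs = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    # Distance of each pixel from a line through the origin at angle theta.
    d = np.abs(-np.sin(theta) * xs + np.cos(theta) * ys)
    f = np.exp(-d ** 2)      # respond strongly near the line
    return f - f.mean()      # zero-mean so flat regions give no response


def make_scene(size=64, theta=np.pi / 4):
    """Synthetic environment: a single bright bar at orientation theta."""
    img = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    d = np.abs(-np.sin(theta) * (xs - size / 2) + np.cos(theta) * (ys - size / 2))
    img[d < 1.5] = 1.0
    return img


def patch_at(img, y, x):
    """The square patch of the image currently under the gaze."""
    h = PATCH // 2
    return img[y - h:y + h + 1, x - h:x + h + 1]


def sida_gaze(img, start, filters):
    """Move the gaze so the dominant filter response stays invariant.

    The action that preserves the response (moving *along* the bar) then
    serves as the agent's internal stand-in for the stimulus orientation.
    """
    y, x = start
    moves = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    trajectory = [(y, x)]
    prev = max(np.sum(patch_at(img, y, x) * f) for f in filters)
    for _ in range(STEPS):
        # Score each candidate move by how well it preserves the response.
        def invariance(m):
            p = patch_at(img, y + m[0], x + m[1])
            return -abs(max(np.sum(p * f) for f in filters) - prev)
        dy, dx = max(moves, key=invariance)
        y, x = y + dy, x + dx
        prev = max(np.sum(patch_at(img, y, x) * f) for f in filters)
        trajectory.append((y, x))
    return trajectory


if __name__ == "__main__":
    scene = make_scene()
    bank = [oriented_filter(t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
    traj = sida_gaze(scene, start=(32, 32), filters=bank)
    print("gaze trajectory:", traj[:6], "...")

Run against the diagonal bar, the greedy invariance rule sends the gaze step by step along the stimulus, so the repeated action itself comes to encode the bar's orientation; that action-based encoding is the grounding intuition the abstract attributes to SIDA.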