Browsing by Author "James C. Lester, Committee Member"
Now showing 1 - 5 of 5
- Bowyer: A Planning Tool for Bridging the Gap between Declarative and Procedural Domains (2008-02-03) Cash, Steven Patrick; R. Michael Young, Committee Chair; Robert St. Amant, Committee Member; James C. Lester, Committee Member
- A Computational Model of Narrative Generation for Surprise Arousal (2009-07-28) Bae, Byung Chull; R. Michael Young, Committee Chair; James C. Lester, Committee Member; Brad Mehlenbacher, Committee Member; Robert Rodman, Committee Member
  This dissertation describes work to develop a planning-based computational model of narrative generation designed to elicit surprise in the mind of a reader. To this end, my approach makes use of two narrative devices – flashback and foreshadowing. While surprise plays an important role in attention focusing, learning, and creativity, little effort has been made to build a computational framework for surprise arousal in narrative. In my computational model, flashback provides a backstory to explain what causes a surprising outcome, while foreshadowing gives hints about the surprise before it occurs. In this work I focus on the arousal of surprise as a cognitive response based on a reader's appraisal of a given situation. In this dissertation I present Prevoyant, a planning-based computational model of surprise arousal in narrative generation, and analyze its effectiveness. To model the unexpectedness in surprise, I adopt a cognitive model of surprise based on expectation failure. This dissertation makes two contributions. First, I present a computational framework for narrative generation designed to elicit surprise. The approach makes use of a two-tier model of narrative and draws on Structural Affect Theory, which claims that a reader's emotions such as surprise or suspense are closely related to narrative structure. Second, I present a methodology for evaluating surprise in planning-based narrative generation, grounded in the cognitive model of surprise causes. The results of the experiments that I conducted strongly support the claim that my system effectively generates a discourse structure for surprise arousal in narrative.
- Interorganizational Business Interactions: Contracts, Processes, Evolution (2008-01-03) Desai, Nirmit Vikram; Munindar P. Singh, Committee Chair; Cecil C. Bozarth, Committee Member; James C. Lester, Committee Member; Laurie A. Williams, Committee Member
- A Schematic Representation for Cognitive Tool-Using Agents (2009-05-13) Mu, Wei; Robert St. Amant, Committee Chair; Ronald P. Endicott, Committee Member; James C. Lester, Committee Member; R. Michael Young, Committee Member
  In artificial intelligence (AI) research, embodied systems have received increasing attention since the 1990s. How to bridge the gap between raw sensorimotor data and symbolic representation in a robotic agent is still an open question. The research described in this document is inspired by theories in cognitive science, such as concept theory and embodied realism, as well as work in robotics and AI. The general goal of this research is to build a system capable of acquiring and maintaining semantic knowledge for higher-level reasoning, in particular reasoning about the use of tools, from the embodied experience of a cognitive agent in a simulated environment or in the real world. This research addresses cognitive theories of embodiment, the design of a general computational architecture, and the design and implementation of AI techniques for solving tool-using problems. One of the major contributions of this research is a computational architecture for an embodied agent that can capture semantic relations from its interactions with the world, sufficient to support effective tool use in both short-term prediction and plan generation. We have implemented an example of this architecture in an Action Schema Generator, or ASG, which can automatically generate production rules and symbolic representations from a simulated agent's embodied experience without losing the ability to translate that knowledge back into its original numerical sensorimotor format. We have developed pragmatic methods to evaluate the performance of the ASG at the component level and the system level, in simulated and real scenarios, for tasks with and without tools. We have also compared our design with other robotics and cognitive architectures, including behavior-based robotics, neuroevolution, and psychologically inspired architectures. We believe that our work provides a general foundation for embodied agents and should be useful in future research.
- A Task-based Framework for the Quantitative Evaluation of Input Devices (2004-01-27) Dulberg, Martin S.; Eric N. Wiebe, Committee Member; David F. McAllister, Committee Co-Chair; James C. Lester, Committee Member; Robert St. Amant, Committee Co-Chair
  This research describes the development of a conceptual framework and methodology that permit input devices in graphical user interfaces to be evaluated in a more meaningful context than in previous studies. We provide a procedure for reusing performance characteristics in expanded domains. Individual performance differences are analyzed, and their implications for current evaluation methods are discussed. We have built an interactive simulation for domain-independent testing of the suitability of different input devices for specific tasks, based on the demand characteristics of the task and the performance characteristics of the device. The simulation allows researchers and practitioners to evaluate the suitability of particular input devices for a given user interface with a severely restricted role for usability testing. Using the system, it is possible to select a device based on optimal task completion time or estimated error rate. The role of inter-task transition times is explored, and a methodology for predicting performance with the use of execution graphs is described.
