Browsing by Author "R. Michael Young, Committee Member"

Now showing 1 - 7 of 7

    Effective Tool Use in a Habile Agent
    (2005-04-28) Wood, Alexander Burchi; Robert St. Amant, Committee Chair; R. Michael Young, Committee Member; Jon Doyle, Committee Member
    Tool use is a hallmark of human intelligence, yet one that has not been fully explored by the artificial intelligence research community. Research in cognitive neuroscience on primates suggests not only that we maintain a mental representation of our body, but that this body schema is modified to include a tool during intentional tool use (Iriki et al., 1996). We have developed a habile (tool-using) agent, based on the Sony Aibo platform, that can pick up a stick and use it as a tool to reach objects previously out of its range. The agent uses a recurrent neural network developed by Steinkühler and Cruse (1998) to maintain an internal body schema, which it uses to find appropriate postures for reaching and grasping tools. We argue that analyzing the activities of such tool-using agents offers an informative way to evaluate intelligence.
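The abstract describes the body-schema mechanism only at a high level. The sketch below illustrates the underlying idea, namely that a grasped tool temporarily extends the agent's kinematic body schema, using an assumed two-link planar arm; the link lengths, the grid search, and all names are invented for illustration and stand in for, rather than reproduce, the thesis's Aibo controller and the Steinkühler-Cruse network.

```python
import math

# Illustrative two-link planar arm with assumed link lengths; the thesis
# uses a Sony Aibo and the Steinkuehler-Cruse recurrent network, neither
# of which is modeled here.
LINK_LENGTHS = (0.10, 0.12)  # meters (assumed values)

def end_effector(angles, tool_length=0.0):
    """Forward kinematics. A grasped tool extends the final link,
    mirroring the idea that the body schema grows to include the tool."""
    lengths = (LINK_LENGTHS[0], LINK_LENGTHS[1] + tool_length)
    x = y = heading = 0.0
    for angle, length in zip(angles, lengths):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

def reachable(target, tool_length=0.0, tol=0.02):
    """Crude reachability test: scan joint angles in 5-degree steps."""
    grid = [math.radians(d) for d in range(-90, 95, 5)]
    for a1 in grid:
        for a2 in grid:
            x, y = end_effector((a1, a2), tool_length)
            if math.hypot(x - target[0], y - target[1]) < tol:
                return True
    return False

target = (0.28, 0.0)                        # beyond the bare arm's 0.22 m reach
print(reachable(target))                    # False: the arm alone cannot reach
print(reachable(target, tool_length=0.10))  # True: a 10 cm stick extends reach
```

The design point the example isolates is that nothing about the reachability computation changes when a tool is held; the tool simply becomes part of the last link, which is the computational analogue of the body-schema extension Iriki et al. observed.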

    Exploring Bimanual Tool-Based Interaction in a Drawing Environment
    (2004-05-28) Butler, Colin Grant; Robert A. St. Amant, Committee Chair; Dennis Bahler, Committee Member; R. Michael Young, Committee Member
    In this document, I present HabilisDraw DT, a drawing environment in which bimanual direct manipulation and a strong tool-use metaphor are supported via the DiamondTouch input device from Mitsubishi Electric Research Laboratories (MERL). The goal of this research is to explore how the contributions of HabilisDraw DT can inform the development of future interfaces. The principles upon which HabilisDraw DT has been built include persistent tools that embody intuitive aspects of their physical counterparts, and an approach to interface learnability that capitalizes on the user's inherent ability to use tools both separately and in conjunction with other tools. Beyond these principles, HabilisDraw DT extends the physical-virtual tool correlation with bimanual input via the MERL DiamondTouch device and a close adherence to the direct-manipulation interaction model. This paper presents background work on novel interaction and an overview of the HabilisDraw interface. It then explores the benefits of a desktop metaphor that closely mimics the behavior of tools and objects in a two-dimensional drawing environment, and argues that the system's fundamental principles can improve the usability of future interfaces.
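As a rough sketch of the bimanual tool metaphor described above (not HabilisDraw DT's actual architecture or the DiamondTouch API, whose details the abstract does not give), one might route each hand's touch stream to a persistent tool object:

```python
# Invented class and function names; this is not HabilisDraw DT's code or
# the DiamondTouch API, only an illustration of one-tool-per-hand routing.

class Tool:
    def __init__(self, name):
        self.name = name

    def apply(self, x, y):
        print(f"{self.name} acts at ({x}, {y})")

class Ruler(Tool):
    def apply(self, x, y):
        # A persistent tool: once placed, it constrains later strokes,
        # much as a physical ruler would.
        print(f"ruler anchored at ({x}, {y}); strokes snap to its edge")

hands = {"left": Ruler("ruler"), "right": Tool("pen")}  # one tool per hand

def on_touch(hand, x, y):
    """Route each concurrent touch stream to the tool held in that hand."""
    hands[hand].apply(x, y)

on_touch("left", 100, 200)   # non-dominant hand positions the ruler
on_touch("right", 140, 200)  # dominant hand draws along it
```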

    HabilisDraw: A Tool-based Direct Manipulation Software Environment
    (2004-07-26) Horton, Thomas Eugene; R. Michael Young, Committee Member; Dennis R. Bahler, Committee Member; Robert St. Amant, Committee Chair
    Direct manipulation interfaces already employ a weak analogy to the use of physical tools in the real world. Despite certain tradeoffs, a stronger application of tool-using principles can lead to improvements in the design of software interfaces. I outline here some of the theory behind such an approach, and describe the design of systems that follow these principles, with emphasis on a tool-based drawing application called HabilisDraw.

    An Inductive Framework for Affect Recognition and Expression in Interactive Learning Environments
    (2009-03-18) McQuiggan, Scott William; James C. Lester, Committee Chair; John L. Nietfeld, Committee Member; Munindar P. Singh, Committee Member; R. Michael Young, Committee Member
    Recent years have seen a growing recognition of the importance of affective reasoning in human-computer interaction. Because affect plays an important role in cognitive functions, such as perception and decision-making, the prospect of modeling user affect and enabling interactive systems to respond appropriately holds much appeal for a broad range of applications. Affective reasoning is particularly promising for educational applications because of the strong connections between affect and learning. If it were possible to accurately detect frustration, monitor changes in efficacy, and predict students’ affective states, interactive learning environments could more effectively tailor problem-solving episodes. However, constructing computational models of affect recognition and affect expression is challenging because of the need to devise solutions that are accurate, efficient, and capable of making early predictions. To this end, we propose CARE, an inductive framework for affect recognition and expression. CARE learns models of affect from observation of human-computer and human-human interaction. First, in training sessions, users perform a series of tasks in interactive environments while CARE monitors reports of users’ affective experiences. In addition, CARE monitors user actions, world state, and physiological responses. Second, CARE induces models of affect from the observed data with machine learning techniques that include decision trees, naive Bayes classifiers, support vector machines, Bayesian networks, and n-grams. Third, at runtime, CARE-induced models monitor user actions, world state, and physiological responses to predict user affective states. In a series of studies involving more than four hundred subjects, the CARE framework has successfully been used to perform a number of affect prediction tasks, including prediction of emotional states, self-efficacy, and metacognitive monitoring. It has also been used to induce models of empathy for virtual agents in interactive learning environments. Results suggest that CARE-induced affect models satisfy the real-time requirements of interactive systems and provide a solid foundation for empirically informed affective reasoning.
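The three-step induce-then-predict loop the abstract outlines could, in minimal form, look like the following; the feature names, data, and choice of a scikit-learn decision tree are invented stand-ins, not CARE's actual instrumentation or models.

```python
# Synthetic stand-in for CARE's induce-then-predict loop; feature names,
# data, and the scikit-learn model choice are all assumptions.
from sklearn.tree import DecisionTreeClassifier

# Step 1: training sessions log user actions, world state, and physiology.
# Each row: [actions_per_minute, task_progress, heart_rate, skin_conductance]
observations = [
    [12, 0.10, 88, 0.61],
    [30, 0.70, 72, 0.20],
    [ 5, 0.05, 95, 0.75],
    [25, 0.60, 70, 0.25],
]
reported_affect = ["frustration", "flow", "frustration", "flow"]

# Step 2: induce a model of affect from the observed data.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(observations, reported_affect)

# Step 3: at runtime, monitor the same channels and predict affective state.
print(model.predict([[8, 0.15, 92, 0.70]]))  # -> ['frustration']
```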

    Integrating Preference Elicitation into Visualizations
    (2007-05-29) Dennis, Brent Moorman; Jon Doyle, Committee Member; R. Michael Young, Committee Member; Christopher Healey, Committee Chair; Carla Savage, Committee Member
    Modern technology has enabled researchers to collect large amounts of information in an expanding scope of research fields. At the same time, these new datasets are becoming more complex, as evidenced by their increasing size and dimensionality. Managing and understanding these datasets has become a challenging problem. Visualizations attempt to address these concerns by creating meaningful graphical representations of data that can rapidly and accurately convey important information and interesting properties about the data to a researcher. However, many existing visualization algorithms are overwhelmed by the size of today's datasets. As a result, information is often forced off-screen due to a lack of visual resources. In previous work, we developed a navigation assistant to aid users in finding interesting data elements located off-screen. The assistant used a graph framework to provide way-finding cues and generate informative animated tours of the visualization. In order to identify which elements to include in this framework, the navigation assistant needs to model users' interests; i.e., their preferences. The efficient collection and modeling of a user's preference information is a fundamental goal of preference elicitation, yet many elicitation techniques have yet to be applied to practical, real-world problems. We address the challenges of integrating a preference model and corresponding elicitation techniques into an environment not especially suited to collecting preference information, namely a visualization environment. Using combinations of explicit and implicit techniques, the navigation assistant collects preference information from users both before and during their interaction with a visualization. These techniques provide input to an underlying preference model used by the navigation assistant to dynamically add or remove elements from the graph framework. Using the preference model, the assistant attempts to create a description of a user's preferences, possibly revealing previously unknown interests.
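A toy version of the update loop the abstract describes, in which explicit and implicit signals feed a preference model that decides which elements stay in the way-finding graph, might look as follows; all class names, gains, and thresholds here are assumptions, not the navigation assistant's actual design.

```python
# Invented weights, gains, and thresholds; a toy model, not the thesis's.
from collections import defaultdict

class PreferenceModel:
    """Accumulates interest weights from explicit and implicit signals."""

    def __init__(self, implicit_gain=0.1, explicit_gain=1.0):
        self.interest = defaultdict(float)
        self.implicit_gain = implicit_gain
        self.explicit_gain = explicit_gain

    def explicit_rating(self, element, rating):
        # Direct feedback, e.g. a -1..+1 rating the user supplies up front.
        self.interest[element] += self.explicit_gain * rating

    def implicit_dwell(self, element, seconds):
        # Indirect feedback inferred during interaction, e.g. dwell time.
        self.interest[element] += self.implicit_gain * seconds

    def graph_elements(self, threshold=0.5):
        """Elements interesting enough to keep in the way-finding graph."""
        return {e for e, w in self.interest.items() if w >= threshold}

model = PreferenceModel()
model.explicit_rating("cluster_A", +1.0)  # user says this matters
model.implicit_dwell("cluster_B", 3.0)    # brief glance: weight 0.3
model.implicit_dwell("cluster_C", 12.0)   # long dwell: weight 1.2
print(sorted(model.graph_elements()))     # ['cluster_A', 'cluster_C']
```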

    The Role of Cognitive and Metacognitive Tool Use in Narrative-Centered Learning Environments.
    (2010-07-01) Shores, Lucy; James Lester, Committee Chair; R. Michael Young, Committee Member; Heather Davis, Committee Member

    A Schematic Representation for Cognitive Tool-Using Agents
    (2009-05-13) Mu, Wei; Robert St. Amant, Committee Chair; Ronald P. Endicott, Committee Member; James C. Lester, Committee Member; R. Michael Young, Committee Member
    In artificial intelligence (AI) research, embodied systems have received increasing attention since the 1990s. How to bridge the gap between raw sensorimotor data and symbolic representation in a robotic agent is still an open question. The research described in this document is inspired by theories in cognitive science, such as concept theory and embodied realism, as well as work in robotics and AI. The general goal of this research is to build a system capable of acquiring and maintaining semantic knowledge for higher-level reasoning, in particular reasoning about the use of tools, from the embodied experience of a cognitive agent in a simulated environment or in the real world. This research addresses cognitive theories of embodiment, the design of a general computational architecture, and the design and implementation of AI techniques for solving tool-using problems. One of the major contributions of this research is a computational architecture for an embodied agent that can capture semantic relations from its interactions with the world, sufficient to support effective tool use both in short-term predictions and in plan generation. As a result, we have implemented an example of this architecture in an Action Schema Generator, or ASG, which can automatically generate production rules and symbolic representations from a simulated agent’s embodied experience without losing the capability of transferring the knowledge back to its original numerical sensorimotor format. We have developed pragmatic methods to evaluate the performance of ASG, at the component level and the system level, in simulated and real scenarios, for tasks with and without tools. We have also compared our design with other robotics and cognitive architectures, including behavior-based robotics, neuroevolution, and psychologically inspired architectures. We believe that our work can provide a general foundation for embodied agents and should be useful in future research.
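The abstract stresses that ASG's symbolic representations remain translatable back to their numeric sensorimotor origins. A compact sketch of that two-way mapping (with invented bins, names, and rule format; this is not the thesis's ASG) might look like:

```python
# Invented bins and rule format; illustrates symbol <-> numeric mapping only.

BINS = {"near": (0.0, 0.3), "mid": (0.3, 0.7), "far": (0.7, 1.5)}  # meters

def symbolize(distance):
    """Numeric sensor reading -> symbol; the interval is kept in BINS,
    so the translation stays reversible."""
    for name, (lo, hi) in BINS.items():
        if lo <= distance < hi:
            return name
    return "out_of_range"

def numeric_range(symbol):
    """Symbol -> back to its original numeric sensorimotor interval."""
    return BINS.get(symbol)

# A production rule of the kind such a system might induce from experience:
# when the target was 'far' and a stick was held, reaching with the tool
# succeeded in past episodes.
rule = {"if": {"target": "far", "holding": "stick"}, "then": "reach_with_tool"}

state = {"target": symbolize(0.9), "holding": "stick"}  # target at 0.9 m
if all(state.get(k) == v for k, v in rule["if"].items()):
    print(rule["then"], "applies; numeric basis:", numeric_range(state["target"]))
```

Keeping the bin boundaries alongside the symbols is what makes the representation reversible: a rule stated over symbols can always be grounded again in the numeric ranges it was induced from.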
