Person: Ying, Lance

Last Name: Ying
First Name: Lance
Name: Ying, Lance

Search Results

  • Publication
    The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling Probabilistic Social Inferences From Linguistic Inputs
    (2023-06-27) Ying, Lance; Collins, Katherine M.; Wei, Megan; Zhang, Cedegao E.; Zhi-Xuan, Tan; Weller, Adrian; Tenenbaum, Joshua B.; Wong, Lionel
    Human beings are social creatures. We routinely reason about other agents, and a crucial component of this social reasoning is inferring people's goals as we learn about their actions. In many settings, we can perform intuitive but reliable goal inference from language descriptions of agents, actions, and the background environment. In this paper, we study how language drives and influences social reasoning in a probabilistic goal inference domain. We propose a neuro-symbolic model that carries out goal inference from linguistic inputs of agent scenarios. The “neuro” part is a large language model (LLM) that translates language descriptions to code representations, and the “symbolic” part is a Bayesian inverse planning engine. To test our model, we design and run a human experiment on a linguistic goal inference task. Our model closely matches human response patterns and better predicts human judgements than using an LLM alone.
  • Publication
    Communicating Common Goal Knowledge Improves Trust-Calibration in Human-AI Collaboration
    (2024-05) Ying, Lance; Gajos, Krzysztof
    In human-AI collaboration, human agents often have a clear goal in mind, and the AI assistant tries to help users achieve their goals more efficiently. However, inferring users' goals from noisy user behavior is non-trivial, and there is often a mismatch between the agents' beliefs about each other's knowledge of the ground-truth goal, leading to coordination failures. In this study, we propose that building common goal knowledge through communication improves the human user's mental model of the AI assistant and leads to more efficient and effective human-AI collaboration. To test this hypothesis, we design an experiment where an AI assistant helps a human user shop for recipes on a grocery platform. We compare user behavior and team performance under three experimental conditions: the AI providing no information about its knowledge of the human's goal, the AI expressing its belief about the human's goal through verbal communication (“Show”), and the AI indicating its confidence in its belief about the human's goal (“Tell”). We find that communicating goal knowledge (in “Show” and “Tell”) increases users' tendency to rely on the AI when it is indeed correct and improves users' subjective ratings of the AI assistant.
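
As a rough illustration of the Bayesian inverse planning idea described in the first abstract above (this is not the authors' NIPE code), the Python sketch below infers a posterior over candidate goals from an agent's observed actions using a Boltzmann-rational action likelihood. The one-dimensional scenario, the goal set, the inverse temperature beta, and the parse_scenario stub standing in for the LLM translation step are all illustrative assumptions, not details taken from the paper.

import math

# Minimal Bayesian inverse planning sketch (illustrative only; not the
# published NIPE implementation). An agent moves left/right on a line,
# and we infer which candidate goal it is heading toward.

def parse_scenario(description):
    """Stand-in for the 'neuro' component: in the paper, an LLM translates
    the language description into a symbolic scenario. Here we return a
    hand-coded example instead of calling a model."""
    return {
        "start": 0,
        "goals": {"A": 2, "B": 5, "C": -3},      # candidate goal positions
        "observed_actions": ["right", "right"],  # each step moves +/- 1
    }

def action_likelihood(position, action, goal_pos, beta=2.0):
    """Boltzmann-rational choice: actions that reduce distance to the goal
    are exponentially more likely (inverse temperature beta)."""
    moves = {"right": 1, "left": -1}
    utilities = {a: -abs((position + d) - goal_pos) for a, d in moves.items()}
    z = sum(math.exp(beta * u) for u in utilities.values())
    return math.exp(beta * utilities[action]) / z

def goal_posterior(scenario, beta=2.0):
    """P(goal | actions) is proportional to P(goal) * prod_t P(action_t | state_t, goal)."""
    priors = {g: 1.0 / len(scenario["goals"]) for g in scenario["goals"]}
    scores = {}
    for g, goal_pos in scenario["goals"].items():
        pos = scenario["start"]
        likelihood = 1.0
        for a in scenario["observed_actions"]:
            likelihood *= action_likelihood(pos, a, goal_pos, beta)
            pos += 1 if a == "right" else -1
        scores[g] = priors[g] * likelihood
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}

if __name__ == "__main__":
    scenario = parse_scenario("The agent walked two steps to the right.")
    print(goal_posterior(scenario))

Running the sketch concentrates the posterior on the goals to the right of the start ("A" and "B"), mirroring the intuition that two rightward steps are strong evidence against a goal on the left.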