Human-AI Interaction for Augmented Reasoning
Improving Human Reflective and Critical Thinking with Artificial Intelligence
April 26, 2025 | 9:00-17:00 JST | CHI 2025, Yokohama, Japan
AI-augmented reasoning systems are cognitive assistants that support human reasoning with AI-based feedback designed to help users improve their critical thinking skills. Enabled by techniques such as argumentation mining, fact-checking, crowdsourcing, attention nudging, and large language models, these systems can provide real-time feedback on logical reasoning, help users identify and avoid flawed arguments and misinformation, suggest counter-arguments, provide evidence-based explanations, and foster deeper reflection.
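To illustrate the LLM-based flavor of such feedback, here is a minimal sketch assuming an OpenAI-style chat completion client (the openai Python package); the model name, prompt wording, and the reasoning_feedback function are illustrative assumptions, not a reference implementation of any system discussed at the workshop.

```python
# A minimal sketch of LLM-based reasoning feedback. Assumes an OpenAI-style
# chat completion client; model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEEDBACK_PROMPT = (
    "You are a critical-thinking assistant. Given the user's argument:\n"
    "1. Name any logical fallacies it may contain, each with a one-line reason.\n"
    "2. Suggest one plausible counter-argument.\n"
    "3. End with a question that prompts the user to re-examine a premise.\n"
    "Do not tell the user what to conclude."
)

def reasoning_feedback(argument: str) -> str:
    """Return reflective feedback on a single user-written argument."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {"role": "system", "content": FEEDBACK_PROMPT},
            {"role": "user", "content": argument},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(reasoning_feedback(
        "Everyone I know says this supplement works, so it must be effective."
    ))
```

The key design choice in this sketch is that the prompt asks the model to end with a question rather than a verdict, nudging the user toward reflection instead of deference.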
The goal of this workshop is to bring together researchers from AI, HCI, cognitive and social science to discuss recent advances in AI-augmented reasoning, to identify open problems in this area, and to cultivate an emerging community on this important topic.
What is AI-Augmented Reasoning?
AI-augmented reasoning refers to the application of artificial intelligence (AI) systems to support, facilitate, and improve human reasoning. Such systems provide insights, identify patterns, uncover biases, and offer guidance that feels intuitive to the user, enabling better-informed decisions that users feel they arrived at through their own thinking.
AI-augmented reasoning systems differ from other AI information-processing systems in that they focus not only on providing accurate information or optimizing decision outcomes, but also on actively engaging users in reflective thinking and building strong, appropriate intuitions.
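To make this distinction concrete, the short sketch below contrasts two prompt framings for the same underlying model; both prompt strings are illustrative assumptions, not prompts from any particular system.

```python
# Illustrative only: two framings of the same task. The first treats the
# model as an answer engine; the second treats it as a reasoning scaffold.

# Answer-oriented framing: optimizes the decision outcome for the user.
DIRECT_PROMPT = (
    "State whether the user's claim is accurate and cite supporting evidence."
)

# Reasoning-augmentation framing: optimizes the user's thinking process.
REFLECTIVE_PROMPT = (
    "Do not state whether the user's claim is accurate. Instead, ask two "
    "questions that test the claim's strongest assumption, and point to one "
    "kind of evidence the user has not yet considered."
)
```

Under the second framing, success is measured not by answer accuracy but by whether the user's own reasoning improves, which is why evaluation methods are one of the workshop goals below.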
Workshop Schedule
- 9:00 AM: Opening & Welcome
- 9:15 AM: Keynote Address
- 10:15 AM: Break
- 10:30 AM: Panel Discussion
- 11:15 AM: Paper Session 1
- 12:00 PM: Lunch Break
- 1:00 PM: Group Activity 1
- 2:00 PM: Break
- 2:15 PM: Paper Session 2
- 3:30 PM: Group Activity 2
- 4:30 PM: Closing Remarks
Keynote Speakers
Dr. Thomas Costello
Thomas Costello is an Assistant Professor of Psychology at American University and a Research Associate at the MIT Sloan School of Management. He studies where political and social beliefs come from, how they differ from person to person, and, ultimately, why they change, using the tools of personality, cognitive, clinical, and political science. He is best known for his work on (a) leveraging artificial intelligence to reduce conspiracy theory beliefs and (b) the psychology of authoritarianism. He has published dozens of research papers in peer-reviewed outlets, including Science, Journal of Personality and Social Psychology, Psychological Bulletin, and Trends in Cognitive Sciences. Thomas developed DebunkBot.com, a public tool for combating conspiracy theories with AI.
Dr. Paolo Torroni
Dr. Paolo Torroni has been an associate professor at the University of Bologna since 2015. His primary research focuses on artificial intelligence, particularly natural language processing, multi-agent systems, and computational logics. He has authored over 180 scientific publications. He heads the Language Technologies Lab, is a past director of the Master’s Degree in Artificial Intelligence in Bologna, and is a visiting fellow at the European University Institute in Florence.
Call for Participation
If you are interested in the workshop, please submit your application below for virtual or in-person participation. The submission deadline is March 2nd, 2025. We will notify accepted participants by March 24th, 2025. The list of participants will be posted on the workshop website.
At least one author of each accepted position paper must attend the workshop and all participants must register for at least one day of the conference. We will host accepted papers on the workshop’s website for participants and others to review prior to the meeting.
Registration Deadline: March 2, 2025
People interested in participating can apply by completing the form below, indicating their disciplinary background and the nature of their interest in the topic, including:
- A link to a relevant design artifact
- Short Bio: Explain the type of background and/or expertise that you would bring to this workshop. Please include information about your role (e.g., student, faculty, industry, non-profit, etc.), your discipline (e.g., HCI, AI, cognitive science, social science, ethics, philosophy, etc.), and your scholarly or professional experience with topics relating to AI-augmented reasoning. (250 word limit)
- Workshop Goals: A statement about your motivation for participating in this workshop, the issue(s) on which you are most focused, and what you hope to gain from the experience. (500 word limit)
- Workshop Contribution: A short statement about how you expect to contribute to the workshop and enhance the experience for other attendees. (500 word limit)
Optional:
- A position paper of up to ten pages (plus references) in the ACM single-column format. We plan to publish proceedings.
Workshop Goals
The workshop’s primary goals are:
- Share State-of-the-Art Research: Present and discuss recent advances in AI-augmented reasoning and human-AI interaction. This includes new techniques, tools, and methodologies developed to enhance critical and reflective thinking.
- Identify Challenges and Opportunities: Highlight the current challenges, limitations, and potential risks associated with AI-augmented reasoning systems, and discuss opportunities for future research and development.
- Interdisciplinary Collaboration: Foster collaboration between researchers from AI, HCI, cognitive science, social science, and other relevant fields to create a multidisciplinary approach to developing AI-augmented reasoning systems.
- Ethical and Social Implications: Examine ethical and social issues surrounding AI-augmented reasoning, such as accidental overreliance and the potential misuse of models of user reasoning.
- Design Principles and Guidelines: Develop design principles and guidelines for AI-augmented reasoning systems that prioritize human agency, autonomy, and long-term learning, and consider the balance between AI assistance and human decision-making.
- Evaluation Methods: Discuss and propose rigorous methods for evaluating AI-augmented reasoning systems, including metrics for assessing improvements in critical thinking skills and impact on decision-making quality.
Topics
Topics of interest include but are not limited to:
- AI-based reasoning interventions and critical thinking support
- Studies on AI-related misinformation and its mitigation
- Political/Democratic reasoning
- Argument mining and argument synthesis
- Fact-checking, attention nudging and information validation
- Human-computer interaction (HCI) methods that boost reasoning
- User modeling and information delivery
- Human-AI Interaction methods for critical thinking
- Wearable systems for cognitive support
- Cognitive theories of reflection and intuitive decision making
Workshop Organizers
Valdemar Danry
MIT Media Lab, United States
Pat Pataranutaporn
MIT Media Lab, United States
Christopher Cui
University of California, San Diego, United States
Jui-Tse (Ray) Hung
Georgia Institute of Technology, United States
Lancelot Blanchard
MIT Media Lab, United States
Zana Buçinca
Harvard University, United States
Chenhao Tan
University of Chicago, United States
Thad Starner
Georgia Institute of Technology, United States
Pattie Maes
MIT Media Lab, United States
Key Dates
Position paper submission deadline: Sunday, March 2nd, 2025 (AoE; extended from February 7th, 2025)
Notification of acceptance: Monday, March 24th, 2025
Workshop date: Saturday, April 26th, 2025