Welcome to the website of the second UncertaiNLP workshop to be held at EMNLP 2025 in Suzhou, China.
UncertaiNLP brings together researchers who embrace sources of uncertainty in human language and NLP tools, harnessing them for improved NLP.
Previous editions of UncertaiNLP: 2024.
Update 18/07: We have extended the submission deadlines and the subsequent review schedule by a week!
Important Dates
- First call for papers: June 6th, 2025
- Second call for papers: July 1st, 2025
- Third call for papers: August 1st, 2025
- Submission deadline: ~~August 8th, 2025~~ August 15th, 2025
- Submission of already pre-reviewed ARR papers: ~~August 22nd, 2025~~ August 29th, 2025
- Notification of acceptance: ~~September 10th, 2025~~ September 17th, 2025
- Camera-ready papers due: ~~September 14th, 2025~~ September 21st, 2025
- Workshop date: November 9th, 2025
All deadlines are 11:59pm UTC-12 (“anywhere on earth”).
Workshop Topic and Content
Human languages are inherently ambiguous, and understanding language input is subject to interpretation and complex contextual dependencies. Nevertheless, the main body of research in NLP is still based on the assumption that ambiguities and other types of underspecification can and must be resolved. This workshop provides a platform for research that embraces variability in human language and aims to represent and evaluate the uncertainty that arises from it, as well as from the modeling tools themselves.
Workshop Topics
UncertaiNLP welcomes submissions on topics related (but not limited) to:
- Formal tools for uncertainty representation
- Theoretical work on probability and its generalizations
- Symbolic representations of uncertainty
- Documenting sources of uncertainty
- Theoretical underpinnings of linguistic sources of variation
- Data collection (e.g., to document linguistic variability, multiple perspectives, etc.)
- Modeling
- Explicit representation of model uncertainty (e.g., parameter and/or hypothesis uncertainty, Bayesian NNs in NLU/NLG, verbalised uncertainty, feature density, external calibration modules)
- Disentangled representation of different sources of uncertainty (e.g., hierarchical models, prompting)
- Reducing uncertainty due to additional context (e.g. clarification questions, retrieval/API augmented models)
- Learning (or parameter estimation)
- Learning from single and/or multiple references
- Gradient estimation in latent variable models
- Probabilistic inference
- Theoretical and applied work on approximate inference (e.g., variational inference, Langevin dynamics)
- Unbiased and asymptotically unbiased sampling algorithms
- Decision making
- Utility-aware decoders and controllable generation
- Selective prediction
- Active learning
- Evaluation
- Statistical evaluation of language models
- Calibration to interpretable notions of uncertainty (e.g., calibration error, conformal prediction)
- Evaluation of epistemic uncertainty
- Hallucinations
- Theoretical and empirical study of hallucination phenomena in NLU/NLG
- Describing, formalising, categorising hallucination phenomena
- Methods for detecting and quantifying hallucinations
- Mitigation techniques including uncertainty-aware generation, retrieval-augmented methods, and controllable generation
- Relationship between specific kinds (or sources) of uncertainty and hallucination occurrence
Invited Speakers
Gal Yona is a Research Scientist at Google Research, Tel Aviv, where she works on improving factuality in large language models, with an emphasis on robustness and uncertainty. Before joining Google, Gal completed her PhD in Computer Science at the Weizmann Institute of Science, developing definitions and algorithms for preventing discrimination in machine learning models. Gal received numerous awards during her PhD, including the Google PhD Fellowship in Machine Learning (2021).
Maxim Panov is an Assistant Professor at MBZUAI, UAE. Before joining MBZUAI, Panov worked as a research scientist at DATADVANCE, where he participated in developing a library of data analysis methods for engineering applications. This library, pSeven, is now used by many companies worldwide, including Airbus, Porsche, Mitsubishi, Toyota, and Limagrain. From 2018, Panov was an assistant professor at the Skolkovo Institute of Science and Technology, Moscow, where he led a statistical machine learning group. Since 2022, he has led an AI theory and algorithms group at the Technology Innovation Institute, Abu Dhabi, UAE. His research interests lie in uncertainty quantification for machine learning model predictions and Bayesian approaches in machine learning. Maxim leads a research team dedicated to exploring the theoretical foundations of uncertainty quantification and its practical applications, and co-leads the development of the LM-Polygraph framework for uncertainty quantification for LLMs. Maxim was a local chair for the ICDM 2024 conference and a recipient of the Best Paper Runner-up Award at the Uncertainty in Artificial Intelligence 2023 conference.
Eyke Hüllermeier heads the Chair of Artificial Intelligence and Machine Learning at LMU Munich. His research interests are centered around methods and theoretical foundations of artificial intelligence, with a specific focus on machine learning and reasoning under uncertainty. He has published more than 300 articles on these topics in top-tier journals and major international conferences, and several of his contributions have been recognized with scientific awards.
Call for Papers
Authors are invited to submit original, unpublished research papers by August 15th in the following categories:
- Full papers (up to 8 pages) for substantial contributions.
- Short papers (up to 4 pages) for ongoing or preliminary work.
All submissions must be in PDF format, submitted electronically via OpenReview and should follow the EMNLP 2025 formatting guidelines (following the ARR CfP: use the official ACL style templates, which are available here).
We now accept submissions with existing ACL Rolling Review (ARR) reviews via OpenReview, with a deadline of August 29th AoE. These submissions must already have been reviewed by ARR; the existing reviews will be used in our evaluation and must be linked to our system through the paper link field in the OpenReview form. Please make sure to also follow the EMNLP 2025 formatting guidelines described above.
All submissions are archival, but we also invite authors of papers accepted to Findings to reach out to the organizing committee of UncertaiNLP to present their papers at the workshop, if in line with the topics described above.
Camera-ready versions for accepted archival papers should be uploaded to the submission system by the camera-ready deadline. Authors may use up to one (1) additional page to address reviewer comments.
The full Call for Papers is available here.
Program Committee
- Luigi Acerbi (University of Helsinki)
- Roee Aharoni (Google Research)
- Alessandro Antonucci (IDSIA)
- Henri Aïdasso (Université du Québec)
- Samuel Barry (Mistral AI)
- Nitay Calderon (Technion)
- Juan Cardenas-Cartagena (University of Groningen)
- Arie Cattan (Bar Ilan University)
- Julius Cheng (University of Cambridge)
- Ye-eun Cho (Sungkyunkwan University)
- Caio Corro (INSA Rennes)
- Nico Daheim (Technische Universität Darmstadt)
- Sarkar Snigdha Sarathi Das (Pennsylvania State University)
- Vivek Datla (Capital One)
- Bonaventure F. P. Dossou (McGill University)
- Adam Faulkner (Capital One)
- Pedro Lobato Ferreira (University of Amsterdam)
- Antske Fokkens (VU University Amsterdam)
- Jes Frellsen (Technical University of Denmark)
- Thomas L. Griffiths (Princeton University)
- Georg Groh (Technical University Munich)
- Christian Hardmeier (IT University Copenhagen)
- Zhiqi Huang (CapitalOne)
- Evgenia Ilia (University of Amsterdam)
- Yuu Jinnai (CyberAgent, Inc.)
- Mucheol Kim (Chung-Ang University)
- Deepak Kumar (Infrrd)
- Lucie Kunitomo-Jacquin (AIST)
- Haau-Sing Li (Technische Universität Darmstadt)
- Edison Marrese-Taylor (University of Tokyo)
- Yan Meng (University of Amsterdam)
- Timothee Mickus (University of Helsinki)
- Tatiana Passali (Aristotle University of Thessaloniki)
- Laura Perez-Beltrachini (University of Edinburgh)
- Yuval Pinter (Ben-Gurion University of the Negev)
- Timothy Pistotti (University of Auckland)
- Alberto Purpura (CapitalOne)
- Julian Rodemann (LMU Munich)
- Rico Sennrich (University of Zürich)
- Arnab Sharma (Paderborn University)
- Anthony Sicilia (Northeastern University)
- Edwin Simpson (University of Bristol)
- Maciej Skorski (University of Warsaw)
- Sharmin Sultana (University of Massachusetts at Lowell)
- Aarne Talman (University of Helsinki)
- Sergey Troshin (University of Amsterdam)
- Grigorios Tsoumakas (Aristotle University of Thessaloniki)
- Dennis Ulmer (University of Amsterdam)
- Teemu Vahtola (University of Helsinki)
- Matias Valdenegro-Toro (University of Groningen)
- Sami Virpioja (University of Helsinki)
- Daniel Vollmers (Paderborn University)
- Ryan Wails (Georgetown University)
- Li Wang (CapitalOne)
- Di Wu (University of Amsterdam)
- Yusen Zhang (Pennsylvania State University)
Contact
You can contact the organizers by email to uncertainlp@googlegroups.com.
Sponsors
We would like to thank UTTER, CRAI, and Google (via a Google Research Scholar award) for their support of this workshop.
Anti-Harassment Policy
The UncertaiNLP workshop adheres to the ACL code of ethics, the ACL anti-harassment policy, and the ACL code of conduct.
Image Credits
Images were created using a text-to-image model provided via getimg.ai, under the CreativeML Open RAIL-M license.