Welcome to the website of the second UncertaiNLP workshop to be held at EMNLP 2025 in Suzhou, China.
Tagline: UncertaiNLP brings together researchers embracing sources of uncertainty from human language and NLP tools, harnessing them for improved NLP.
Previous editions of UncertaiNLP: 2024.
Important Dates
- First call for papers: TBD
- Second call for papers: TBD
- Third call for papers: TBD
- Submission deadline: TBD
- Submission of pre-reviewed ARR papers: TBD
- Notification of acceptance: TBD
- Camera-ready papers due: TBD
- Workshop date: TBD
Workshop Topic and Content
Human languages are inherently ambiguous, and understanding language input is subject to interpretation and complex contextual dependencies. Nevertheless, the main body of research in NLP is still based on the assumption that ambiguities and other types of underspecification can and must be resolved. This workshop provides a platform for research that embraces variability in human language and aims to represent and evaluate the uncertainty that arises from it, and from modeling tools themselves.
Workshop Topics
UncertaiNLP welcomes submissions on topics related (but not limited) to:
- Formal tools for uncertainty representation
- Theoretical work on probability and its generalizations
- Symbolic representations of uncertainty
- Documenting sources of uncertainty
- Theoretical underpinnings of linguistic sources of variation
- Data collection (e.g., to document linguistic variability, multiple perspectives, etc.)
- Modeling
- Explicit representation of model uncertainty (e.g., parameter and/or hypothesis uncertainty, Bayesian NNs in NLU/NLG, verbalised uncertainty, feature density, external calibration modules)
- Disentangled representation of different sources of uncertainty (e.g., hierarchical models, prompting)
- Reducing uncertainty through additional context (e.g., clarification questions, retrieval/API-augmented models)
- Learning (or parameter estimation)
- Learning from single and/or multiple references
- Gradient estimation in latent variable models
- Probabilistic inference
- Theoretical and applied work on approximate inference (e.g., variational inference, Langevin dynamics)
- Unbiased and asymptotically unbiased sampling algorithms
- Decision making
- Utility-aware decoders and controllable generation
- Selective prediction
- Active learning
- Evaluation
- Statistical evaluation of language models
- Calibration to interpretable notions of uncertainty (e.g., calibration error, conformal prediction)
- Evaluation of epistemic uncertainty
Program Committee
- Luigi Acerbi (University of Helsinki, FI)
- Roee Aharoni (Google Research, IL)
- Alexandra Bodrova (Princeton University, US)
- Margarida M. Campos (Instituto de Telecomunicações, Instituto Superior Técnico, PT)
- Julius Cheng (University of Cambridge, UK)
- Caio Corro (INSA Rennes, FR)
- Nico Daheim (Technische Universität Darmstadt, DE)
- António Farinhas (Instituto Superior Técnico, PT)
- Raquel Fernandez (University of Amsterdam, NL)
- Jes Frellsen (Technical University of Denmark, DK)
- Taisiya Glushkova (Instituto Superior Técnico, PT)
- Christian Hardmeier (IT University Copenhagen, DK)
- Evgenia Ilia (University of Amsterdam, NL)
- Yuu Jinnai (CyberAgent, Inc., JP)
- Haau-Sing Li (Technische Universität Darmstadt, DE)
- Timothee Mickus (University of Helsinki, FI)
- Natalie Schluter (Technical University of Denmark, DK)
- Philip Schulz (Amazon, AU)
- Sebastian Schuster (University College London, University of London, UK)
- Rico Sennrich (University of Zürich, CH)
- Anthony Sicilia (Northeastern University, US)
- Edwin Simpson (University of Bristol, UK)
- Aman Sinha (University of Lorraine, FR)
- Arno Solin (Aalto University, FI)
- Dharmesh Tailor (University of Amsterdam, NL)
- Aarne Talman (University of Helsinki, FI)
- Ivan Titov (University of Edinburgh, UK)
- Dennis Ulmer (IT University Copenhagen, Technical University of Denmark (DTU), DK)
- Teemu Vahtola (University of Helsinki, FI)
- Sami Virpioja (University of Helsinki, FI)
- Andreas Vlachos (University of Cambridge, UK)
- Yuxia Wang (Mohamed bin Zayed University of Artificial Intelligence, AE)
Contact
You can contact the organizers by email to uncertainlp@googlegroups.com.
Anti-Harassment Policy
The UncertaiNLP workshop adheres to the ACL’s code of ethics, ACL’s anti-harassment policy, and ACL’s code of conduct.
Image Credits
Images were created using a text-to-image model provided via getimg.ai, under the CreativeML Open RAIL-M license.