Uphill Battles in Language Processing
Early researchers in Natural Language Processing had lofty goals, including getting computers to understand stories, engage in natural, cooperative dialogues with people, and translate text and speech fluently and accurately from one human language to another. While there were significant early achievements (including systems such as SHRDLU, LUNAR and COOP), the knowledge they were based on and the techniques they employed could not be scaled up for practical use.
While much of what early researchers set out to achieve has been either forgotten or side-lined in favor of what can be done by exploiting large data sets and processing power, its potential value has not gone away: there is much to be gained from recognizing not just what was said, but why; from identifying the conclusions people naturally draw from both what has been said and what hasn't; and from representing domains in a sufficiently rich way to reduce reliance on what is explicit in a text. We therefore believe that reviving these early aspirations can have a broad and positive impact in the current context of large data sets, "deep" and probabilistic methods, and (especially) methods that aim to combine the strengths of logical and data-driven inference.
This workshop will remind the community of these early goals and the attempts to achieve them, highlight the uphill battles that remain, and, we hope, help revive those goals in a context in which much more can be done. The workshop will be split into four sessions, each covering a different area and its challenges. Each session will start with four short (10-minute) presentations: two by established researchers who carried out early work in the area, and two by more junior researchers known for their work on specific problems in the area. Each session will have a moderator, who will then invite the speakers and audience to discuss open questions. The four sessions and the invited speakers are listed below.
The workshop will also feature a poster session for students and researchers to present and discuss their current work. A short description of each poster will appear in the workshop proceedings.
9:00   Text Understanding (invited talks, followed by discussion; Chairs: Annie Louis, Michael Roth)
       Reading between the Lines (Hal Daumé III)
       Opportunities and Challenges for a Bayesian Approach to Language Processing (Andrew Kehler)
       A (maybe not yet) Unified Theory of Inference for Text Understanding (Chris Manning)
       Drawing Inferences (Marie-Catherine de Marneffe)

10:20  Poster Boasters I
       An Analysis of Prerequisite Skills for Reading Comprehension (Saku Sugawara, Akiko Aizawa)
       Bridging the gap between computable and expressive event representations in Social Media (Darina Benikova, Torsten Zesch)
       Statistical Script Learning with Recurrent Neural Networks (Karl Pichotta, Raymond Mooney)
       Moving away from semantic overfitting in disambiguation datasets (Marten Postma, Filip Ilievski, Piek Vossen, Marieke van Erp)
       Unsupervised Event Coreference for Abstract Words (Dheeraj Rajagopal, Eduard Hovy, Teruko Mitamura)
       Towards Broad-coverage Meaning Representation: The Case of Comparison Structures (Omid Bakhshandeh, James Allen)

10:30  Coffee break

11:00  Natural Language Generation (invited talks, followed by discussion; Chair: Michael White)
       Neural Natural Language Generation (Ioannis Konstas)
       Uphill Battles in Language Generation: Sentence Planning and Lexical Choice (Kathleen McKeown)
       Strong Baselines, Evaluation, and the Role of Humans in Grounded Language Generation (Margaret Mitchell)
       An Uphill Battle: Achieving Pragmatic Congruency in Multilingual Texts (Donia Scott)

12:20  Poster Boasters II
       DialPort: A General Framework for Aggregating Dialog Systems (Tiancheng Zhao, Kyusong Lee, Maxine Eskenazi)
       C2D2E2: Using Call Centers to Motivate the Use of Dialog and Diarization in Entity Extraction (Ken Church, Weizhong Zhu, Jason Pelecanos)
       Visualizing the Content of a Children's Story in a Virtual World: Lessons Learned (Quynh Ngoc Thi Do, Steven Bethard, Marie-Francine Moens)
       Stylistic Transfer in Natural Language Generation Systems Using Recurrent Neural Networks (Jad Kabbara, Jackie Chi Kit Cheung)
       Using Language Groundings for Context-Sensitive Text Prediction (Timothy Lewis, Cynthia Matuszek, Amy Hurst, Matthew Taylor)
       Towards a continuous modeling of natural language domains (Sebastian Ruder, Parsa Ghaffari, John G. Breslin)

12:30  Lunch break

14:00  Dialogue and Speech (invited talks, followed by discussion; Chair: Bonnie Webber)
       Toward Fluid Conversational Interaction in Spoken Dialogue Systems (David DeVault)
       Three Steps towards Real Artificial Speech Communication (Mark Liberman)
       Cooperation in Dialogue Systems from Eliza to Alexa (Diane Litman)
       Uphill Battles in Task Modeling and Grounding for Dialog (Amanda Stent)

15:20  Poster Session (including coffee break)

16:30  Grounded Language (invited talks, followed by discussion; Chair: Luke Zettlemoyer)
       Uphill Battles: Language Grounded in Reasoning Agents (James Allen)
       Language Grounding towards Situated Human-Robot Communication (Joyce Chai)
       Knowledge about the World (Yejin Choi)
       Grounding Computational Linguistics in AI Planning (Mark Steedman)

17:50  Closing