Project Papers & Lectures

Event Structure and State Source Dependence in Events Data Construction
David Sylvan, Ashley Thornton, Jean-Louis Arcand

119th Annual Meeting of the American Political Science Association
Los Angeles, California; 31 August–3 September 2023

A wide variety of tasks have been framed as text-to-text tasks to allow processing by sequence-to-sequence models. We propose a new task of generating a semi-structured interpretation of a source document. The interpretation is semi-structured in that it contains mandatory and optional fields with free-text information. This structure is surfaced by human annotations, which we standardize and convert to text format. We then propose an evaluation technique that is generally applicable to any such semi-structured annotation, called equivalence classes evaluation. The evaluation technique is efficient and scalable; it creates a large number of evaluation instances from a comparatively cheap clustering of the free-text information by domain experts. For our task, we release a dataset about the monetary policy of the Federal Reserve. On this corpus, our evaluation shows larger differences between pretrained models than standard text generation metrics do.
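The equivalence-classes idea can be illustrated with a small Python sketch: a prediction counts as correct if it falls in the same expert-defined cluster as the reference. The field name, example phrasings, and the scoring rule below are invented for illustration and are not drawn from the released Federal Reserve dataset.

```python
# Minimal sketch of an equivalence-classes check (illustrative only; the field
# name, clusters, and scoring rule are assumptions, not the paper's data).

# Expert-provided clustering: each equivalence class groups free-text phrasings
# that count as the same answer.
EQUIVALENCE_CLASSES = {
    "direction": [
        {"raise rates", "hike", "tighten policy"},
        {"cut rates", "lower rates", "ease policy"},
    ],
}

def same_class(field, predicted, reference):
    """Return True if predicted and reference fall in the same expert cluster."""
    for cluster in EQUIVALENCE_CLASSES.get(field, []):
        if predicted in cluster and reference in cluster:
            return True
    # Fall back to exact match for fields without expert clusters.
    return predicted == reference

def score(prediction, gold):
    """Fraction of gold fields whose predicted value matches up to equivalence."""
    hits = sum(same_class(f, prediction.get(f, ""), v) for f, v in gold.items())
    return hits / len(gold) if gold else 1.0

print(score({"direction": "tighten policy"}, {"direction": "raise rates"}))  # 1.0
```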

Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) (2022): 262–275. 

Conference website

Paper

Modeling Journalistic Framing of State Announcements: Using NLP to Generate Characterizations of Acts
David Sylvan, Jean-Louis Arcand, Ashley Thornton

12th Annual Meeting of the European Political Science Association
Prague; 23–25 June 2022
Foreign Policy and Journalistic Assumptions: Incorporating Background Semantics Into Machine Learning Models of Event Interpretation
David Sylvan, Jean-Louis Arcand, Ashley Thornton

63rd Annual Convention of the International Studies Association
Nashville, Tennessee; 28 March–2 April 2022
A Hybrid Text Analysis Method for Comparing Contextual Specificity in Political Documents
David Sylvan, Jean-Louis Arcand, Ashley Thornton

Annual Meeting of the American Political Science Association
Seattle, Washington; 30 September–3 October 2021
Modeling the Interpretation of Policy Announcements: Russia, the Federal Reserve, and the New York Times
David Sylvan, Jean-Louis Arcand, Ashley Thornton

11th Annual Meeting of the European Political Science Association
Virtual conference; 24 & 25 June 2021
Russia, the Federal Reserve, and the New York Times: Machine-Learning Models of How Elites Interpret Policy Announcements
David Sylvan, Jean-Louis Arcand, Ashley Thornton

62nd Annual Convention of the International Studies Association
Virtual conference; 6–9 April 2021
Automated Interpretation of Political and Economic Policy Documents: Machine Learning Using Semantic and Syntactic Information
David Sylvan, Jean-Louis Arcand

IV ISA Forum of Sociology
Porto Alegre, Brazil; 23–28 February 2021
Automated Interpretation of Political and Economic Policy Documents: Machine Learning Using Semantic and Syntactic Information
Jean-Louis Arcand

International Sociological Association Conference on "Language and Society: Research Advances in Social Sciences"
Warsaw, Poland; 26 & 27 September 2019
Related papers & presentations
Unsupervised Token-level Hallucination Detection from Summary Generation By-products (2022)
Andreas Marfurt, James Henderson

Hallucinations in abstractive summarization are model generations that are unfaithful to the source document. Current methods for detecting hallucinations operate mostly on noun phrases and named entities, and restrict themselves to the XSum dataset, which is known to have hallucinations in 3 out of 4 training examples (Maynez et al., 2020). We instead consider the CNN/DailyMail dataset, where the summarization model has not seen abnormally many hallucinations during training. We automatically detect candidate hallucinations at the token level, irrespective of their part of speech. Our detection comes essentially for free, as we only use information the model already produces during generation of the summary. This enables practitioners to jointly generate a summary and identify possible hallucinations, with minimal overhead. We repurpose an existing factuality dataset and create our own token-level annotations. The evaluation on these two datasets shows that our model achieves better precision-recall tradeoffs than its competitors, which additionally require a model forward pass.
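As a rough illustration of detection from generation by-products, the sketch below flags tokens that received both low model probability and little attention to the source. The thresholds and the combination rule are assumptions for illustration; the paper's detector differs in its specifics.

```python
# Hedged sketch of flagging candidate hallucination tokens from generation
# by-products (token probabilities and source attention). Thresholds and the
# combination rule are illustrative assumptions, not the paper's method.

def flag_hallucinations(tokens, token_probs, source_attn, p_thresh=0.1, a_thresh=0.2):
    """Return indices of summary tokens that look unsupported by the source.

    tokens      : generated summary tokens
    token_probs : model probability assigned to each generated token
    source_attn : attention mass each token placed on the source document
    """
    flagged = []
    for i, (p, a) in enumerate(zip(token_probs, source_attn)):
        # Low confidence and little grounding in the source -> candidate hallucination.
        if p < p_thresh and a < a_thresh:
            flagged.append(i)
    return flagged

summary = ["The", "deal", "was", "signed", "in", "2019"]
probs   = [0.90, 0.80, 0.85, 0.70, 0.60, 0.05]
attn    = [0.50, 0.60, 0.55, 0.40, 0.30, 0.10]
print([summary[i] for i in flag_hallucinations(summary, probs, attn)])  # ['2019']
```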

Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) (2022): 248–261. 

Conference website

Paper

Sentence-level Planning for Especially Abstractive Summarization (2021)
Andreas Marfurt, James Henderson

Abstractive summarization models heavily rely on copy mechanisms, such as the pointer network or attention, to achieve good performance, measured by textual overlap with reference summaries. As a result, the generated summaries stay close to the formulations in the source document. We propose the *sentence planner* model to generate more abstractive summaries. It includes a hierarchical decoder that first generates a representation for the next summary sentence, and then conditions the word generator on this representation. Our generated summaries are more abstractive and at the same time achieve high ROUGE scores when compared to human reference summaries. We verify the effectiveness of our design decisions with extensive evaluations.
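A toy PyTorch sketch of the two-level idea: a sentence-level step produces a plan vector for the next sentence, and the word generator is conditioned on that plan. The use of GRUs and the dimensions are assumptions for illustration; the sentence planner itself is Transformer-based.

```python
# Rough sketch of a two-level decoder: a sentence-level step plans the next
# sentence, and a word-level decoder generates it conditioned on the plan.
import torch
import torch.nn as nn

class SentencePlannerSketch(nn.Module):
    def __init__(self, vocab_size, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.sentence_rnn = nn.GRUCell(d_model, d_model)   # plans one sentence at a time
        self.word_rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, doc_repr, prev_sentence_summary, word_inputs):
        """doc_repr, prev_sentence_summary: (batch, d_model); word_inputs: (batch, len)."""
        # 1) Sentence level: produce the plan representation for the next sentence.
        plan = self.sentence_rnn(prev_sentence_summary, doc_repr)
        # 2) Word level: condition the word generator on the plan.
        emb = self.embed(word_inputs)
        words, _ = self.word_rnn(emb, plan.unsqueeze(0))
        return self.out(words)  # (batch, len, vocab) logits for the next sentence

model = SentencePlannerSketch(vocab_size=1000)
logits = model(torch.zeros(2, 256), torch.zeros(2, 256), torch.randint(0, 1000, (2, 7)))
print(logits.shape)  # torch.Size([2, 7, 1000])
```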

Presented at the Third Workshop on New Frontiers in Summarization, EMNLP 2021
Paper
Presentation
Code

Syntax-Aware Graph-to-Graph Transformer for Semantic Role Labelling (2021)
Alireza Mohammadshahi, James Henderson

The goal of semantic role labelling (SRL) is to recognise the predicate-argument structure of a sentence. Recent models have shown that syntactic information can enhance SRL performance, while syntax-agnostic approaches have also achieved reasonable performance; the best way to encode syntactic information for the SRL task therefore remains an open question. In this paper, we propose the Syntax-aware Graph-to-Graph Transformer (SynG2G-Tr) architecture, which encodes syntactic structure by inputting graph relations as embeddings directly into the self-attention mechanism of the Transformer. This approach adds a soft bias towards attention patterns that follow the syntactic structure, but also allows the model to use this information to learn alternative patterns. We evaluate our model on both dependency-based and span-based SRL datasets (CoNLL 2005 and CoNLL 2009), and outperform all previous syntax-aware and syntax-agnostic models in both in-domain and out-of-domain settings. Our architecture is general and can be applied to encode any graph information for a desired downstream task.
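The mechanism of feeding graph relations into self-attention can be sketched as a relation-dependent additive bias on the attention logits, in the spirit of relation-aware attention. The shapes and the additive form below are illustrative assumptions; SynG2G-Tr's exact parameterization differs.

```python
# Illustrative sketch: each (i, j) token pair gets a learned embedding for its
# syntactic relation, added to the attention logits as a soft bias.
import torch
import torch.nn as nn
import torch.nn.functional as F

def graph_biased_attention(q, k, v, relation_ids, relation_emb):
    """q, k, v: (batch, len, d); relation_ids: (batch, len, len) ints;
    relation_emb: nn.Embedding(num_relations, d)."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5             # content term
    rel = relation_emb(relation_ids)                         # (batch, len, len, d)
    rel_scores = (q.unsqueeze(2) * rel).sum(-1) / d ** 0.5   # bias from the graph relation
    attn = F.softmax(scores + rel_scores, dim=-1)
    return attn @ v

b, n, d, num_rel = 2, 5, 64, 4
emb = nn.Embedding(num_rel, d)
out = graph_biased_attention(torch.randn(b, n, d), torch.randn(b, n, d),
                             torch.randn(b, n, d),
                             torch.randint(0, num_rel, (b, n, n)), emb)
print(out.shape)  # torch.Size([2, 5, 64])
```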

Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement (2021)
Alireza Mohammadshahi, James Henderson

We propose the Recursive Non-autoregressive Graph-to-Graph Transformer architecture (RNGTr) for the iterative refinement of arbitrary graphs through the recursive application of a non-autoregressive Graph-to-Graph Transformer and apply it to syntactic dependency parsing. We demonstrate the power and effectiveness of RNGTr on several dependency corpora, using a refinement model pre-trained with BERT. We also introduce Syntactic Transformer (SynTr), a non-recursive parser similar to our refinement model. RNGTr can improve the accuracy of a variety of initial parsers on 13 languages from the Universal Dependencies Treebanks, English and Chinese Penn Treebanks, and the German CoNLL2009 corpus, even improving over the new state-of-the-art results achieved by SynTr, significantly improving the state-of-the-art for all corpora tested.
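The refinement loop itself can be sketched simply: re-predict the whole graph conditioned on the previous prediction until it stops changing or a step budget is reached. The `refine` function below is a placeholder for the Graph-to-Graph Transformer, not an implementation of it.

```python
# Conceptual sketch of recursive non-autoregressive refinement of a parse.

def iterative_refinement(sentence, initial_graph, refine, max_steps=3):
    """Return a refined dependency graph (list of head indices, one per token)."""
    graph = initial_graph
    for _ in range(max_steps):
        new_graph = refine(sentence, graph)   # non-autoregressive: all arcs re-predicted at once
        if new_graph == graph:                # fixed point reached, stop early
            break
        graph = new_graph
    return graph

# Toy usage with a trivial stand-in refiner that attaches every token to the root.
toy_refine = lambda sent, g: [0] * len(sent)
print(iterative_refinement(["She", "slept", "."], [1, 2, 0], toy_refine))  # [0, 0, 0]
```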

Transactions of the Association for Computational Linguistics (2021) 9: 120–138. 

Conference website

Paper

Code

Graph-to-Graph Transformer for Transition-based Dependency Parsing (2020)
Alireza Mohammadshahi, James Henderson

We propose the Graph2Graph Transformer architecture for conditioning on and predicting arbitrary graphs, and apply it to the challenging task of transition-based dependency parsing. After proposing two novel Transformer models of transition-based dependency parsing as strong baselines, we show that adding the proposed mechanisms for conditioning on and predicting graphs of Graph2Graph Transformer results in significant improvements, both with and without BERT pre-training. The novel baselines and their integration with Graph2Graph Transformer significantly outperform the state-of-the-art in traditional transition-based dependency parsing on both English Penn Treebank, and 13 languages of Universal Dependencies Treebanks. Graph2Graph Transformer can be integrated with many previous structured prediction methods, making it easy to apply to a wide range of NLP tasks.
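A minimal transition-based loop illustrates the conditioning-on-the-partial-graph idea: each action is predicted from the parser state together with the arcs built so far. The arc-standard-style transitions and the toy policy are illustrative assumptions, not the paper's system.

```python
# Simplified arc-standard-style loop: the action classifier (a stand-in for
# Graph2Graph Transformer) sees the parser state and the arcs predicted so far.

def parse(tokens, predict_action):
    """predict_action(stack, buffer, arcs) -> 'SHIFT' | 'LEFT' | 'RIGHT'."""
    stack, buffer, arcs = [], list(range(len(tokens))), []
    while buffer or len(stack) > 1:
        action = predict_action(stack, buffer, arcs)
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))
        elif action == "LEFT" and len(stack) >= 2:
            head, dep = stack[-1], stack.pop(-2)
            arcs.append((head, dep))          # new arc becomes part of the model's input
        elif action == "RIGHT" and len(stack) >= 2:
            dep, head = stack.pop(), stack[-1]
            arcs.append((head, dep))
        else:
            break
    return arcs

# Toy policy: shift everything, then attach left neighbours to the last token.
toy_policy = lambda s, b, a: "SHIFT" if b else "LEFT"
print(parse(["I", "saw", "her"], toy_policy))  # [(2, 1), (2, 0)]
```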

GILE: A Generalized Input-Label Embedding for Text Classification (2019)
Nikolaos Pappas, James Henderson

Neural text classification models typically treat output labels as categorical variables that lack description and semantics. This forces their parametrization to be dependent on the label set size, and, hence, they are unable to scale to large label sets and generalize to unseen ones. Existing joint input-label text models overcome these issues by exploiting label descriptions, but they are unable to capture complex label relationships, have rigid parametrization, and their gains on unseen labels happen often at the expense of weak performance on the labels seen during training. In this paper, we propose a new input-label model that generalizes over previous such models, addresses their limitations, and does not compromise performance on seen labels. The model consists of a joint nonlinear input-label embedding with controllable capacity and a joint-space-dependent classification unit that is trained with cross-entropy loss to optimize classification performance. We evaluate models on full-resource and low- or zero-resource text classification of multilingual news and biomedical text with a large label set. Our model outperforms monolingual and multilingual models that do not leverage label semantics and previous joint input-label space models in both scenarios.
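A minimal sketch of the joint input-label idea: both the document and each label description are mapped into a shared space and scored by their interaction there, so the parameter count does not grow with the label set and unseen labels can be scored from their descriptions. The dimensions and the specific interaction below are assumptions, not GILE's exact formulation.

```python
# Minimal joint input-label scorer: project document and label descriptions
# into a shared space and score each pair with a joint-space classification unit.
import torch
import torch.nn as nn

class JointInputLabelScorer(nn.Module):
    def __init__(self, d_text=300, d_joint=128):
        super().__init__()
        self.proj_input = nn.Linear(d_text, d_joint)
        self.proj_label = nn.Linear(d_text, d_joint)
        self.classify = nn.Linear(d_joint, 1)   # joint-space classification unit

    def forward(self, doc_vec, label_vecs):
        """doc_vec: (batch, d_text); label_vecs: (num_labels, d_text) label-description encodings."""
        h_doc = torch.tanh(self.proj_input(doc_vec))        # (batch, d_joint)
        h_lab = torch.tanh(self.proj_label(label_vecs))     # (num_labels, d_joint)
        joint = h_doc.unsqueeze(1) * h_lab.unsqueeze(0)     # (batch, num_labels, d_joint)
        return self.classify(joint).squeeze(-1)             # (batch, num_labels) logits

model = JointInputLabelScorer()
logits = model(torch.randn(4, 300), torch.randn(10, 300))
print(logits.shape)  # torch.Size([4, 10]); train with cross-entropy over labels
```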

Transactions of the Association for Computational Linguistics (2019) 7: 139–155.

Paper
