Project Papers & Lectures

Foreign Policy and Journalistic Assumptions: Incorporating Background Semantics Into Machine Learning Models of Event Interpretation
David Sylvan, Jean-Louis Arcand, Ashley Thornton

63rd Annual Convention of the International Studies Association, 30 March–2 April 2022

A Hybrid Text Analysis Method for Comparing Contextual Specificity in Political Documents
David Sylvan, Jean-Louis Arcand, Ashley Thornton

Annual Meeting of the American Political Science Association, 30 September–3 October 2021

Modeling the Interpretation of Policy Announcements: Russia, the Federal Reserve, and the New York Times
David Sylvan, Jean-Louis Arcand, Ashley Thornton

11th Annual Meeting of the European Political Science Association, 24–25 June 2021

Russia, the Federal Reserve, and the New York Times: Machine-Learning Models of How Elites Interpret Policy Announcements
David Sylvan, Jean-Louis Arcand, Ashley Thornton

62nd Annual Convention of the International Studies Association, 6–9 April 2021

Automated Interpretation of Political and Economic Policy Documents: Machine Learning Using Semantic and Syntactic Information
David Sylvan, Jean-Louis Arcand

IV ISA Forum of Sociology
Porto Alegre, Brazil; 23–28 February 2021
https://www.isa-sociology.org/en/conferences/forum/porto-alegre-2021

Automated Interpretation of Political and Economic Policy Documents: Machine Learning Using Semantic and Syntactic Information
Jean-Louis Arcand
International Sociological Association Conference on "Language and Society: Research Advances in Social Sciences"
Warsaw, Poland; 26 & 27 September 2019
https://www.language-and-society.org/conferences/language-and-society-research-advances-in-social-sciences-international-conference/

Related papers & presentations
Sentence-level Planning for Especially Abstractive Summarization (2021)
Andreas Marfurt, James Henderson

Abstractive summarization models heavily rely on copy mechanisms, such as the pointer network or attention, to achieve good performance, measured by textual overlap with reference summaries. As a result, the generated summaries stay close to the formulations in the source document. We propose the "sentence planner" model to generate more abstractive summaries. It includes a hierarchical decoder that first generates a representation for the next summary sentence, and then conditions the word generator on this representation. Our generated summaries are more abstractive and at the same time achieve high ROUGE scores when compared to human reference summaries. We verify the effectiveness of our design decisions with extensive evaluations.

Presented at the Third Workshop on New Frontiers in Summarization, EMNLP 2021
Paper: https://aclanthology.org/2021.newsum-1.1/
Presentation: https://screencast-o-matic.com/watch/cr6XqEVXO5n
Code: https://github.com/idiap/sentence-planner
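
The hierarchical decoding idea above — plan a representation for the next summary sentence first, then condition word generation on it — can be illustrated with a minimal sketch. This is not the authors' implementation (see the Code link above); it uses a simple GRU-based word decoder rather than a Transformer, and all module and variable names are hypothetical.

```python
# Minimal sketch of a hierarchical "sentence planner" decoder (illustrative only).
import torch
import torch.nn as nn

class SentencePlannerSketch(nn.Module):
    def __init__(self, vocab_size, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Sentence-level state: updated once per summary sentence to form a plan vector.
        self.sentence_rnn = nn.GRUCell(d_model, d_model)
        # Word-level decoder: generates tokens conditioned on the current plan vector.
        self.word_rnn = nn.GRU(d_model * 2, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, prev_sentence_summary, plan_state, target_tokens):
        # 1) Update the sentence-level state from a summary of the previous sentence.
        plan_state = self.sentence_rnn(prev_sentence_summary, plan_state)
        # 2) Condition the word generator on the new sentence representation.
        emb = self.embed(target_tokens)                              # (B, T, d)
        plan = plan_state.unsqueeze(1).expand(-1, emb.size(1), -1)   # broadcast plan over time
        hidden, _ = self.word_rnn(torch.cat([emb, plan], dim=-1))
        return self.out(hidden), plan_state

model = SentencePlannerSketch(vocab_size=1000)
B, T, d = 2, 5, 256
logits, new_plan = model(torch.zeros(B, d), torch.zeros(B, d), torch.randint(0, 1000, (B, T)))
print(logits.shape)  # torch.Size([2, 5, 1000])
```

The point of the sketch is the division of labour: the word generator receives an explicit plan vector for the current sentence rather than relying only on the previously generated tokens.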

Syntax-Aware Graph-to-Graph Transformer for Semantic Role Labelling (2021)
Alireza Mohammadshahi, James Henderson

The goal of semantic role labelling (SRL) is to recognise the predicate-argument structure of a sentence. Recent models have shown that syntactic information can enhance SRL performance, while syntax-agnostic approaches have also achieved reasonable results, so the best way to encode syntactic information for SRL remains an open question. In this paper, we propose the Syntax-aware Graph-to-Graph Transformer (SynG2G-Tr) architecture, which encodes syntactic structure by inputting graph relations as embeddings directly into the self-attention mechanism of the Transformer. This approach adds a soft bias towards attention patterns that follow the syntactic structure, but also allows the model to use this information to learn alternative patterns. We evaluate our model on both dependency-based and span-based SRL datasets (CoNLL 2009 and CoNLL 2005), and outperform all previous syntax-aware and syntax-agnostic models in both in-domain and out-of-domain settings. Our architecture is general and can be applied to encode any graph information for a desired downstream task.
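
A rough sketch of the core mechanism — injecting graph relations as embeddings into self-attention — is given below. It is a single-head simplification under assumed shapes, not the SynG2G-Tr code; the `RelationAwareAttention` class and relation-embedding names are illustrative.

```python
# Graph-relation-aware self-attention: pairwise relation embeddings act as a
# soft bias inside the attention computation (illustrative sketch).
import math
import torch
import torch.nn as nn

class RelationAwareAttention(nn.Module):
    def __init__(self, d_model, num_relations):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # One embedding per relation type (index 0 reserved for "no relation").
        self.rel_k = nn.Embedding(num_relations, d_model)
        self.rel_v = nn.Embedding(num_relations, d_model)
        self.scale = math.sqrt(d_model)

    def forward(self, x, relations):
        # x: (B, N, d) token states; relations: (B, N, N) relation-type ids.
        q, k, v = self.q(x), self.k(x), self.v(x)
        rk, rv = self.rel_k(relations), self.rel_v(relations)       # (B, N, N, d)
        # Content-content scores plus a relation-dependent bias term.
        scores = torch.einsum("bid,bjd->bij", q, k)
        scores = (scores + torch.einsum("bid,bijd->bij", q, rk)) / self.scale
        attn = scores.softmax(dim=-1)
        # Values are likewise augmented with relation embeddings.
        out = torch.einsum("bij,bjd->bid", attn, v)
        out = out + torch.einsum("bij,bijd->bid", attn, rv)
        return out

attn = RelationAwareAttention(d_model=64, num_relations=40)
x = torch.randn(2, 7, 64)
rels = torch.randint(0, 40, (2, 7, 7))
print(attn(x, rels).shape)  # torch.Size([2, 7, 64])
```

Because the relation embeddings only bias the attention scores, the model can follow the syntactic structure where it helps and ignore it where it does not.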

Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement (2021)
Alireza Mohammadshahi, James Henderson

We propose the Recursive Non-autoregressive Graph-to-Graph Transformer architecture (RNGTr) for the iterative refinement of arbitrary graphs through the recursive application of a non-autoregressive Graph-to-Graph Transformer, and apply it to syntactic dependency parsing. We demonstrate the power and effectiveness of RNGTr on several dependency corpora, using a refinement model pre-trained with BERT. We also introduce Syntactic Transformer (SynTr), a non-recursive parser similar to our refinement model. RNGTr improves the accuracy of a variety of initial parsers on 13 languages from the Universal Dependencies Treebanks, the English and Chinese Penn Treebanks, and the German CoNLL 2009 corpus, even improving over the new state-of-the-art results achieved by SynTr, and thereby significantly advances the state of the art on all corpora tested.

Transactions of the Association for Computational Linguistics (2021) 9: 120–138. 

https://doi.org/10.1162/tacl_a_00358

Presented at EACL 2021: https://2021.eacl.org/

Code: https://github.com/idiap/g2g-transformer
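
The refinement procedure itself can be sketched as a short loop: the same non-autoregressive model repeatedly re-predicts the full dependency graph, conditioned on its previous prediction, until the parse stops changing or a step budget is reached. The `refine` stub below stands in for the actual Graph-to-Graph Transformer (see the Code link above) and is purely illustrative.

```python
# Illustrative recursive, non-autoregressive refinement loop over dependency graphs.
import torch

def refine(tokens, heads):
    # Placeholder refinement model: re-predicts a head for every token, conditioned
    # on the current graph `heads` (here replaced by random scores for illustration).
    scores = torch.randn(tokens.size(0), tokens.size(1), tokens.size(1))
    return scores.argmax(dim=-1)

def rng_refinement(tokens, initial_heads, max_steps=3):
    heads = initial_heads
    for _ in range(max_steps):
        new_heads = refine(tokens, heads)   # non-autoregressive: whole graph at once
        if torch.equal(new_heads, heads):   # stop early once the parse is stable
            break
        heads = new_heads
    return heads

tokens = torch.randint(0, 100, (1, 6))
print(rng_refinement(tokens, torch.zeros(1, 6, dtype=torch.long)).shape)  # torch.Size([1, 6])
```

The initial parse can come from any parser (or even an empty graph), which is why the method can be wrapped around a variety of initial parsers.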

Graph-to-Graph Transformer for Transition-based Dependency Parsing (2020)
Alireza Mohammadshahi, James Henderson

We propose the Graph2Graph Transformer architecture for conditioning on and predicting arbitrary graphs, and apply it to the challenging task of transition-based dependency parsing. After proposing two novel Transformer models of transition-based dependency parsing as strong baselines, we show that adding the proposed mechanisms for conditioning on and predicting graphs of Graph2Graph Transformer results in significant improvements, both with and without BERT pre-training. The novel baselines and their integration with Graph2Graph Transformer significantly outperform the state of the art in traditional transition-based dependency parsing on both the English Penn Treebank and 13 languages of the Universal Dependencies Treebanks. Graph2Graph Transformer can be integrated with many previous structured prediction methods, making it easy to apply to a wide range of NLP tasks.
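
As a rough illustration of where the graph conditioning enters a transition-based parser, the sketch below runs an arc-standard-style loop and passes the partially built arc set back to the transition scorer at every step. The transition system shown and the `score_transitions` stub (here a random chooser) are assumptions for illustration, not the paper's model.

```python
# Transition-based parsing loop in which the partial graph is fed back to the
# scorer at every step (illustrative sketch; the scorer is a stub).
import random

def score_transitions(stack, buffer, arcs):
    # Stand-in for a Transformer that conditions on the partial graph `arcs`.
    actions = []
    if buffer:
        actions.append("SHIFT")
    if len(stack) >= 2:
        actions += ["LEFT-ARC", "RIGHT-ARC"]
    return random.choice(actions)

def parse(tokens):
    stack, buffer, arcs = [], list(range(len(tokens))), []
    while buffer or len(stack) > 1:
        action = score_transitions(stack, buffer, arcs)
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "LEFT-ARC":
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))   # (head, dependent)
        else:  # RIGHT-ARC
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

print(parse(["The", "cat", "sat"]))
```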

GILE: A Generalized Input-Label Embedding for Text Classification (2019)
Nikolaos Pappas, James Henderson

Neural text classification models typically treat output labels as categorical variables that lack description and semantics. This forces their parametrization to depend on the label set size, and hence they are unable to scale to large label sets or generalize to unseen ones. Existing joint input-label text models overcome these issues by exploiting label descriptions, but they are unable to capture complex label relationships, have rigid parametrization, and their gains on unseen labels often come at the expense of weak performance on the labels seen during training. In this paper, we propose a new input-label model that generalizes over previous such models, addresses their limitations, and does not compromise performance on seen labels. The model consists of a joint nonlinear input-label embedding with controllable capacity and a joint-space-dependent classification unit that is trained with cross-entropy loss to optimize classification performance. We evaluate the model on full-resource and low- or zero-resource text classification of multilingual news and biomedical text with a large label set. Our model outperforms monolingual and multilingual models that do not leverage label semantics, as well as previous joint input-label space models, in both scenarios.

Transactions of the Association for Computational Linguistics (2019) 7: 139–155. 

https://doi.org/10.1162/tacl_a_00259
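
The joint input-label embedding can be sketched as follows: the document encoding and each label-description encoding are projected nonlinearly into a shared space, and a small classification unit scores their compatibility there, so the output layer no longer depends on the label set size. Dimensions, names, and the elementwise joint combination below are illustrative assumptions, not the exact GILE parametrization.

```python
# Joint input-label embedding classifier: scores any label from its description
# encoding, so unseen labels can be handled (illustrative sketch).
import torch
import torch.nn as nn

class JointInputLabelScorer(nn.Module):
    def __init__(self, d_input, d_label, d_joint=128):
        super().__init__()
        self.input_proj = nn.Sequential(nn.Linear(d_input, d_joint), nn.Tanh())
        self.label_proj = nn.Sequential(nn.Linear(d_label, d_joint), nn.Tanh())
        # Classification unit operating on the joint space (capacity set by d_joint).
        self.score = nn.Linear(d_joint, 1)

    def forward(self, doc_vec, label_vecs):
        # doc_vec: (B, d_input); label_vecs: (L, d_label) encoded label descriptions.
        h_doc = self.input_proj(doc_vec)                    # (B, d_joint)
        h_lab = self.label_proj(label_vecs)                 # (L, d_joint)
        joint = h_doc.unsqueeze(1) * h_lab.unsqueeze(0)     # (B, L, d_joint) joint features
        return self.score(joint).squeeze(-1)                # (B, L) one logit per label

model = JointInputLabelScorer(d_input=300, d_label=300)
logits = model(torch.randn(4, 300), torch.randn(10, 300))
print(logits.shape)  # torch.Size([4, 10]); works for any number of labels
```

Because the parameters live in the joint space rather than in a per-label output matrix, the same trained scorer can be applied to labels never seen during training, given only their description encodings.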