:::info
Author:
(1) Mingda Chen.
:::
Table of Links
2.1 Self-Supervised Language Pretraining
2.2 Naturally-Occurring Data Structures
2.3 Sentence Variational Autoencoder
3 IMPROVING SELF-SUPERVISION FOR LANGUAGE PRETRAINING
3.1 Improving Language Representation Learning via Sentence Ordering Prediction
3.2 Improving In-Context Few-Shot Learning via Self-Supervised Training
4 LEARNING SEMANTIC KNOWLEDGE FROM WIKIPEDIA
4.1 Learning Entity Representations from Hyperlinks
4.2 Learning Discourse-Aware Sentence Representations from Document Structures
4.3 Learning Concept Hierarchies from Document Categories
5 DISENTANGLING LATENT REPRESENTATIONS FOR INTERPRETABILITY AND CONTROLLABILITY
5.1 Disentangling Semantics and Syntax in Sentence Representations
5.2 Controllable Paraphrase Generation with a Syntactic Exemplar
6 TAILORING TEXTUAL RESOURCES FOR EVALUATION TASKS
6.1 Long-Form Data-to-Text Generation
6.2 Long-Form Text Summarization
6.3 Story Generation with Constraints
APPENDIX A – APPENDIX TO CHAPTER 3
APPENDIX B – APPENDIX TO CHAPTER 6
CHAPTER 4 – LEARNING SEMANTIC KNOWLEDGE FROM WIKIPEDIA
In this chapter, we describe our contributions to exploiting rich, naturally-occurring structures on Wikipedia for various NLP tasks. In Section 4.1, we use hyperlinks to learn entity representations. Unlike most prior work, the resulting models represent entities with contextualized representations rather than a fixed set of vectors. In Section 4.2, we use article structures (e.g., paragraph positions and section titles) to make sentence representations aware of the broader context in which they are situated, leading to improvements across various discourse-related tasks. In Section 4.3, we use article category hierarchies to learn concept hierarchies that improve model performance on textual entailment tasks.
The material in this chapter is adapted from Chen et al. (2019a), Chen et al. (2019b), and Chen et al. (2020a).
:::info
This paper is available on arxiv under CC 4.0 license.
:::