:::info
Author:
(1) Mingda Chen.
:::
Table of Links
2.1 Self-Supervised Language Pretraining
2.2 Naturally-Occurring Data Structures
2.3 Sentence Variational Autoencoder
3 IMPROVING SELF-SUPERVISION FOR LANGUAGE PRETRAINING
3.1 Improving Language Representation Learning via Sentence Ordering Prediction
3.2 Improving In-Context Few-Shot Learning via Self-Supervised Training
4 LEARNING SEMANTIC KNOWLEDGE FROM WIKIPEDIA
4.1 Learning Entity Representations from Hyperlinks
4.2 Learning Discourse-Aware Sentence Representations from Document Structures
4.3 Learning Concept Hierarchies from Document Categories
5 DISENTANGLING LATENT REPRESENTATIONS FOR INTERPRETABILITY AND CONTROLLABILITY
5.1 Disentangling Semantics and Syntax in Sentence Representations
5.2 Controllable Paraphrase Generation with a Syntactic Exemplar
6 TAILORING TEXTUAL RESOURCES FOR EVALUATION TASKS
6.1 Long-Form Data-to-Text Generation
6.2 Long-Form Text Summarization
6.3 Story Generation with Constraints
APPENDIX A – APPENDIX TO CHAPTER 3
APPENDIX B – APPENDIX TO CHAPTER 6
5.3 Summary
In this chapter, we demonstrated the utility of our proposed latent-variable framework in the context of representation learning (Section 5.1) and controllable generation (Section 5.2). In both cases, we leveraged the structure of paired data to disentangle semantics and syntax in sentence representations. We found that the syntactic and semantic latent variables exhibited the desired characteristics, with each capturing its targeted aspect of the sentence. For controlled generation, we provided human-annotated evaluation sets to promote future research in this direction. In follow-up work, we also showed that the multi-task, latent-variable framework generalizes to bilingual text corpora (Chen et al., 2020b).
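To make the framework summarized above more concrete, the sketch below shows one possible shape of a sentence encoder-decoder with two separate latent variables, one intended to carry semantics and one intended to carry syntax. This is a minimal illustration, not the authors' implementation: both latents are modeled with diagonal Gaussians for simplicity (the original framework uses a von Mises-Fisher distribution for the semantic variable), and all module and variable names are assumptions made for this example.

```python
# Minimal sketch of a two-latent-variable sentence VAE that separates a
# "semantic" latent from a "syntactic" latent. Illustrative only; names,
# dimensions, and the Gaussian semantic posterior are assumptions.
import torch
import torch.nn as nn

class DisentangledSentenceVAE(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, lat_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Separate posterior heads for the semantic and syntactic latents.
        self.sem_mu = nn.Linear(hid_dim, lat_dim)
        self.sem_logvar = nn.Linear(hid_dim, lat_dim)
        self.syn_mu = nn.Linear(hid_dim, lat_dim)
        self.syn_logvar = nn.Linear(hid_dim, lat_dim)
        # The decoder conditions on the concatenation of both latents.
        self.decoder = nn.GRU(emb_dim + 2 * lat_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def encode(self, tokens):
        _, h = self.encoder(self.embed(tokens))
        h = h.squeeze(0)
        return (self.sem_mu(h), self.sem_logvar(h),
                self.syn_mu(h), self.syn_logvar(h))

    @staticmethod
    def reparameterize(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, tokens, z_sem, z_syn):
        emb = self.embed(tokens)
        z = torch.cat([z_sem, z_syn], dim=-1)
        z = z.unsqueeze(1).expand(-1, emb.size(1), -1)
        h, _ = self.decoder(torch.cat([emb, z], dim=-1))
        return self.out(h)

    def forward(self, tokens):
        sem_mu, sem_lv, syn_mu, syn_lv = self.encode(tokens)
        z_sem = self.reparameterize(sem_mu, sem_lv)
        z_syn = self.reparameterize(syn_mu, syn_lv)
        logits = self.decode(tokens, z_sem, z_syn)
        # Standard Gaussian KL terms for both latents.
        kl = (-0.5 * torch.sum(1 + sem_lv - sem_mu.pow(2) - sem_lv.exp())
              - 0.5 * torch.sum(1 + syn_lv - syn_mu.pow(2) - syn_lv.exp()))
        return logits, kl
```

In this kind of setup, the paired-data structure mentioned above would supply the disentangling pressure: for a paraphrase pair, one sentence can be reconstructed from its partner's semantic latent combined with its own syntactic latent, encouraging meaning to flow into `z_sem` and surface form into `z_syn`. The specific multi-task losses used in the original work are described in Sections 5.1 and 5.2.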
:::info
This paper is available on arxiv under CC 4.0 license.
:::