Biography

I am a fourth-year Ph.D. student in the Department of Computer Science at the University of Central Florida, advised by Dr. Fei Liu. I'm a member of the UCF NLP Group.

From 2012 to 2016, I was an undergraduate student researcher at Fudan University, working with Dr. Wei Zhang and Dr. Xiangyang Xue.

Research Interests

My research focuses on developing automatic summarization systems that generate concise and informative summaries from large collections of documents, supporting fast browsing of textual content. My current work combines natural language processing with cutting-edge deep neural models. I'm also interested in reinforcement learning and other AI-related topics.

  • Text Generation and Summarization
  • Language Comprehension and Grammar
  • Deep Neural Networks
  • Reinforcement Learning and other AI-related topics

Publications

  • Controlling the Amount of Verbatim Copying in Abstractive Summarization
    Kaiqiang Song, Bingqing Wang, Zhe Feng, Liu Ren and Fei Liu
    Accepted at the Thirty-Fourth AAAI Conference on Artificial Intelligence
    (AAAI), New York, USA, February 2020. [paper][code]
  • Joint Parsing and Generation for Abstractive Summarization
    Kaiqiang Song, Logan Lebanoff, Qipeng Guo, Xipeng Qiu, Xiangyang Xue, Chen Li, Dong Yu and Fei Liu
    Accepted at the Thirty-Fourth AAAI Conference on Artificial Intelligence
    (AAAI), New York, USA, February 2020. [paper][code]
  • Scoring Sentence Singletons and Pairs for Abstractive Summarization
    Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu
    Accepted at the 57th Annual Meeting of the Association for Computational Linguistics
    (ACL), Florence, Italy, July 2019. [paper][code]
  • Adapting the Neural Encoder-Decoder Framework from Single to Multi-Document Summarization
    Logan Lebanoff, Kaiqiang Song, and Fei Liu
    Accepted at the 2018 Conference on Empirical Methods in Natural Language Processing
    (EMNLP), Brussels, Belgium, November 2018. [paper][code]
  • Structure-Infused Copy Mechanisms for Abstractive Summarization
    Kaiqiang Song, Lin Zhao, and Fei Liu
    Accepted at the 27th International Conference on Computational Linguistics
    (COLING), Santa Fe, New Mexico, USA, August 2018. [paper][code] (Oral Presentation)

Projects

Joint Parsing and Generation for Abstractive Summarization

Sentences produced by abstractive summarization systems can be ungrammatical and fail to preserve the original meanings, despite being locally fluent. In this paper we propose to remedy this problem by jointly generating a sentence and its syntactic dependency parse while performing abstraction. If generating a word can introduce an erroneous relation to the summary, the behavior must be discouraged. The proposed method thus holds promise for producing grammatical sentences and encouraging the summary to stay true to the original. The contributions of this work are twofold. First, we present a novel neural architecture for abstractive summarization that combines a sequential decoder with a tree-based decoder in a synchronized manner to generate a summary sentence and its syntactic parse. Second, we describe a novel human evaluation protocol to assess whether, and to what extent, a summary remains true to its original meanings. We evaluate our method on a number of summarization datasets and demonstrate competitive results against strong baselines.
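
To make the synchronization concrete, below is a minimal sketch of a single decoding step in which a sequential word decoder and an arc scorer share one hidden state: the step emits next-word logits and, at the same time, scores every previously generated token as the new token's syntactic head. This is an illustrative assumption, not the paper's implementation; all module names and hyperparameters are hypothetical.

```python
# Minimal sketch of one synchronized decoding step (hypothetical names and
# sizes throughout; not the paper's architecture).
import torch
import torch.nn as nn

class JointDecoderStep(nn.Module):
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.cell = nn.LSTMCell(hidden_size, hidden_size)
        self.word_out = nn.Linear(hidden_size, vocab_size)          # next-word logits
        self.arc_scorer = nn.Bilinear(hidden_size, hidden_size, 1)  # head-attachment scores

    def forward(self, prev_token, state, prev_states):
        # prev_token: (batch,) id of the last emitted word
        # state: (h, c) LSTM state; prev_states: (batch, t, hidden) from earlier steps
        h, c = self.cell(self.embed(prev_token), state)
        word_logits = self.word_out(h)
        # Score each previously generated token as the syntactic head of the
        # new token; a candidate word whose best arc is implausible can be
        # penalized during beam search, discouraging erroneous relations.
        h_rep = h.unsqueeze(1).expand_as(prev_states).contiguous()
        arc_scores = self.arc_scorer(prev_states.contiguous(), h_rep).squeeze(-1)
        return word_logits, arc_scores, (h, c)
```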

Controlling the Amount of Verbatim Copying in Abstractive Summarization

An abstract must not change the meaning of the original text. The single most effective way to achieve that is to increase the amount of copying while still allowing for text abstraction. Human editors can usually exercise control over copying, resulting in summaries that are more extractive than abstractive, or vice versa. However, it remains poorly understood whether modern neural abstractive summarizers can provide the same flexibility, i.e., learn from single reference summaries to generate multiple summary hypotheses with varying degrees of copying. In this paper, we present a neural summarization model that, by learning from single human abstracts, can produce a broad spectrum of summaries ranging from purely extractive to highly generative ones. We frame the task of summarization as language modeling and exploit alternative mechanisms to generate summary hypotheses. Our method allows for control over copying during both the training and decoding stages of a neural summarization model. Through extensive experiments we illustrate the significance of our proposed method for controlling the amount of verbatim copying and achieve competitive results over strong baselines. Our analysis further reveals interesting and non-obvious facts.
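
As a rough illustration of copy control in general, the sketch below uses a pointer-generator-style mixture as a stand-in (this is not the mechanism of the paper, which frames summarization as language modeling): a user-set scalar biases the learned copy/generate gate at decoding time, pushing the same trained model toward more extractive or more generative output. The function name and the `copy_bias` parameter are hypothetical.

```python
# Sketch of decode-time copy control via a biased copy/generate gate.
import torch

def mixed_distribution(gen_logits, attn, src_ids, p_gen, copy_bias=0.0):
    """gen_logits: (batch, vocab) generator logits; attn: (batch, src_len)
    attention over source tokens; src_ids: (batch, src_len) source token ids
    mapped into the vocabulary; p_gen: (batch, 1) learned generate-gate;
    copy_bias: scalar in logit space (< 0 favors copying, > 0 generation)."""
    gate = torch.sigmoid(torch.logit(p_gen.clamp(1e-6, 1 - 1e-6)) + copy_bias)
    p_vocab = gate * torch.softmax(gen_logits, dim=-1)
    # Route the remaining (1 - gate) probability mass onto the source tokens,
    # i.e., verbatim copying through the attention distribution.
    p_copy = p_vocab.new_zeros(p_vocab.shape).scatter_add_(1, src_ids, (1 - gate) * attn)
    return p_vocab + p_copy
```

In this toy setup, sweeping `copy_bias` over a range of values at decoding time would yield the spectrum of hypotheses from purely extractive to highly generative that the abstract describes.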

Structure-Infused Copy Mechanisms for Abstractive Summarization

The seq2seq paradigm has achieved remarkable success in summarization. However, in many cases, system summaries still struggle to keep the meaning of the original intact. They may omit important words or relations that play critical roles in the syntactic structure of the source sentences. In this paper, we present structure-infused copy mechanisms to facilitate copying important source words and relations to summaries. The approach naturally combines the dependency structure of source sentences with the copy mechanism of an abstractive summarization framework. It outperforms state-of-the-art systems on the benchmark summarization dataset. Experimental results also demonstrate the effectiveness of the approach at preserving salient source words and dependency relations.
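
A minimal sketch of the general idea, under assumed shapes and names (an illustration of structure infusion in spirit, not the paper's exact architecture): embed each source token's dependency relation label and add it to the token's encoder state before computing copy-attention scores, so that syntactically salient words attract more copy probability.

```python
# Sketch: dependency labels infused into the copy-attention keys.
import torch
import torch.nn as nn

class StructureInfusedCopyAttention(nn.Module):
    def __init__(self, hidden_size=256, num_dep_labels=40):
        super().__init__()
        self.dep_embed = nn.Embedding(num_dep_labels, hidden_size)
        self.key_proj = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, enc_states, dep_labels, dec_state):
        # enc_states: (batch, src_len, hidden) encoder states
        # dep_labels: (batch, src_len) dependency-relation ids per source token
        # dec_state:  (batch, hidden) current decoder hidden state
        infused = enc_states + self.dep_embed(dep_labels)   # structure-aware keys
        scores = torch.bmm(self.key_proj(infused), dec_state.unsqueeze(-1)).squeeze(-1)
        return torch.softmax(scores, dim=-1)                # copy distribution over source
```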

Contact

  • Address

    Kaiqiang Song
    Department of Computer Science, HEC 234
    University of Central Florida
    4000 Central Florida Blvd
    Orlando, FL 32816