
On Extractive and Abstractive Neural Document Summarization with Transformer Language Models

(Submitted on 7 Sep 2019)

Abstract: We present a method to produce abstractive summaries of long documents (exceeding several thousand words) via neural abstractive summarization. We perform a simple extractive step before generating a summary; its output is used to condition the transformer language model on relevant information before the model is tasked with generating the summary. We show that this extractive step significantly improves summarization results. We also show that this approach produces more abstractive summaries than prior work that employs a copy mechanism, while still achieving higher ROUGE scores. Note: the abstract above was not written by the authors; it was generated by one of the models presented in this paper.
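The two-stage pipeline the abstract describes (extract salient sentences, then use them to condition a language model) can be illustrated with a toy sketch. The paper uses a learned extractor and a transformer language model; the frequency-based sentence scorer and the `TL;DR:` prompt cue below are simplifications chosen for illustration, not the authors' actual method.

```python
import re
from collections import Counter

def extract_salient_sentences(document, k=2):
    """Score sentences by average word frequency and return the top-k
    in their original order (a crude stand-in for a learned extractor)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    freq = Counter(re.findall(r"\w+", document.lower()))
    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    top = set(sorted(sentences, key=score, reverse=True)[:k])
    # Preserve document order so the conditioning text reads coherently.
    return [s for s in sentences if s in top]

def build_prompt(document, k=2):
    """Concatenate the extracted sentences as conditioning context,
    followed by a cue telling the language model to summarize."""
    return " ".join(extract_salient_sentences(document, k)) + "\nTL;DR:"
```

The key design point from the abstract survives even in this sketch: the generator never sees the full long document, only the extracted sentences, which keeps the conditioning context within the language model's input budget.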


Submission history

From: Sandeep Subramanian
[v1] Sat, 7 Sep 2019 04:33 (UTC)


