
Single Headed Attention RNN: Stop Thinking With Your Head, Hacker News


[Submitted on 26 Nov 2019]


Abstract: The leading approaches in language modeling are all obsessed with TV shows of my youth – namely Transformers and Sesame Street. Transformers this, Transformers that, and over here a bonfire worth of GPU-TPU-neuromorphic wafer scale silicon. We opt for the lazy path of old and proven techniques with a fancy crypto-inspired acronym: the Single Headed Attention RNN (SHA-RNN). The author's lone goal is to show that the entire field might have evolved in a different direction if we had instead been obsessed with a slightly different acronym and slightly different result. We take a previously strong language model based only on boring LSTMs and get it to within a stone's throw of a stone's throw of state-of-the-art byte level language model results on enwik8. We also achieve state-of-the-art on WikiText-103 – or do we? This work has undergone no intensive hyperparameter optimization and lived entirely on a commodity desktop machine that made the author's small studio apartment far too warm in the midst of a San Franciscan summer. The final results are achievable in plus or minus 24 hours on a single GPU as the author is impatient. The attention mechanism is also readily extended to large contexts and requires minimal computation. Take that Sesame Street.
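The abstract names the architecture but not its mechanics. As a rough sketch of what a single attention head over LSTM hidden states can look like – not the paper's actual implementation, whose details live in the author's released code – something like the following PyTorch module captures the idea. The class name, the query-only projection, and the memory argument are illustrative assumptions here, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadAttention(nn.Module):
    # Hypothetical minimal single-headed attention; not the SHA-RNN's exact module.
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)  # project queries only (an assumption, to keep compute small)
        self.scale = dim ** -0.5

    def forward(self, x, memory=None):
        # x: (batch, seq, dim) hidden states from an LSTM
        # memory: optional (batch, past, dim) cached states for long contexts
        kv = x if memory is None else torch.cat([memory, x], dim=1)
        q = self.query(x)
        scores = torch.bmm(q, kv.transpose(1, 2)) * self.scale  # (batch, seq, past+seq)
        # causal mask: each position may attend to itself and earlier positions only
        t, s = q.size(1), kv.size(1)
        mask = torch.ones(t, s, device=x.device).tril(diagonal=s - t).bool()
        scores = scores.masked_fill(~mask, float('-inf'))
        return torch.bmm(F.softmax(scores, dim=-1), kv)

# Usage: one attention head on top of an LSTM language model's outputs.
lstm = nn.LSTM(input_size=512, hidden_size=512, batch_first=True)
attn = SingleHeadAttention(512)
h, _ = lstm(torch.randn(2, 64, 512))
out = attn(h)  # (2, 64, 512)

The optional memory argument hints at why the abstract can claim cheap long contexts: cached past hidden states are reused directly as keys and values without re-projection, so growing the window costs little beyond the attention product itself.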

Submission history

From: Stephen Merity
[v1] Tue, 26 Nov 2019 09:45 UTC


