Just read twice: closing the recall gap for recurrent language models
Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference. However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts, leading to brittle in-context learning (ICL) quality.
| Published in | arXiv.org |
|---|---|
| Main Authors | Simran Arora, Aman Timalsina, Aaryan Singhal, Benjamin Spector, Sabri Eyuboglu, Xinyi Zhao, Ashish Rao, Atri Rudra, Christopher Ré |
| Format | Paper |
| Language | English |
| Published | Ithaca: Cornell University Library, arXiv.org, 07.07.2024 |
| Subjects | Algorithms; Context; Hardness; Large language models; Recall; Transformers |
| Online Access | Get full text: https://www.proquest.com/docview/3077528894 |
| ISSN | 2331-8422 |
| Abstract | Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference. However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts, leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe that the order in which information is shown to the LM impacts the selection difficulty. To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., a recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context. Our analysis suggests that, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives \(11.0 \pm 1.3\) points of improvement, averaged across \(16\) recurrent LMs and the \(6\) ICL tasks, with \(11.9\times\) higher throughput than FlashAttention-2 for generation prefill (length \(32\)k, batch size \(16\), NVIDIA H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides \(99\%\) of Transformer quality at \(360\)M params., \(30\)B tokens and \(96\%\) at \(1.3\)B params., \(50\)B tokens on average across the tasks, with \(19.2\times\) higher throughput for prefill than FA2. [Illustrative sketches of JRT-Prompt, the SD intuition, and prefix-linear-attention follow the record table below.] |
|---|---|
| Author | Arora, Simran; Timalsina, Aman; Singhal, Aaryan; Spector, Benjamin; Eyuboglu, Sabri; Zhao, Xinyi; Rao, Ashish; Atri Rudra; Ré, Christopher |
| ContentType | Paper |
| Copyright | 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
| Discipline | Physics |
| EISSN | 2331-8422 |
| Genre | Working Paper/Pre-Print |
| IsOpenAccess | true |
| IsPeerReviewed | false |
| IsScholarly | false |
| Language | English |
| PublicationCentury | 2000 |
| PublicationDate | 20240707 |
| PublicationDateYYYYMMDD | 2024-07-07 |
| PublicationDecade | 2020 |
| PublicationPlace | Ithaca |
| PublicationTitle | arXiv.org |
| PublicationYear | 2024 |
| Publisher | Cornell University Library, arXiv.org |
| SecondaryResourceType | preprint |
| SubjectTerms | Algorithms Context Hardness Large language models Recall Transformers |
| Title | Just read twice: closing the recall gap for recurrent language models |
| URI | https://www.proquest.com/docview/3077528894 |
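Two of the abstract's ideas are easy to make concrete. JRT-Prompt simply repeats the in-context data so that, on the second pass, a fixed-memory recurrent LM revisits every token after it has already seen what the query needs; the set-disjointness (SD) reduction explains why order matters for any fixed-memory streaming reader. The sketch below is a minimal illustration only, not the authors' released code: the prompt template, the separator, and the toy streaming routine are assumptions made here for clarity.

```python
def jrt_prompt(context: str, question: str, repeats: int = 2) -> str:
    """Build a JRT-Prompt-style input: repeat the context before the question.

    Repeating the context effectively shows the model all data orders, so a
    recurrent LM that discarded a detail on the first pass can recover it on
    the second. (Hypothetical template; the paper does not fix a format here.)
    """
    return "\n\n".join([context] * repeats + [question])


def streaming_set_disjointness(first: list[int], second: list[int]) -> bool:
    """Decide whether two streamed sets are disjoint in a single pass over each.

    The streaming reader must buffer whichever set arrives first, so its memory
    scales with |first|: putting the smaller set first is cheaper. This is the
    order-dependence the paper formalizes for recurrent models.
    """
    seen = set(first)  # memory cost is O(|first|)
    return all(x not in seen for x in second)


if __name__ == "__main__":
    ctx = "Doc: The 2019 audit was led by R. Chen. The 2020 audit was led by M. Ortiz."
    q = "Question: Who led the 2020 audit?"
    print(jrt_prompt(ctx, q, repeats=2))

    # Same SD instance, two presentation orders: only the buffered set's size changes.
    small, large = [3, 7], list(range(1000))
    assert streaming_set_disjointness(small, large) is False  # buffers 2 items
    assert streaming_set_disjointness(large, small) is False  # buffers 1000 items
```

The asymmetry in `streaming_set_disjointness` (whichever set is streamed first must be buffered) mirrors the paper's claim that the recurrent memory needed for SD depends on whether the smaller set appears first in-context.

JRT-RNN's non-causal prefix-linear-attention can likewise be pictured as an attention pattern: prompt tokens read the whole prompt bidirectionally, while later tokens stay causal. The toy below reproduces only that pattern, in quadratic form with an assumed ReLU feature map; the actual JRT-RNN layer is computed recurrently with its own parameterization.

```python
import numpy as np


def prefix_linear_attention(q, k, v, prefix_len, feature_map=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Toy prefix-linear-attention over (T, d) arrays.

    Positions i < prefix_len (the prompt) attend non-causally to the entire
    prefix; positions i >= prefix_len attend causally to everything before them.
    """
    T, _ = q.shape
    fq, fk = feature_map(q), feature_map(k)
    out = np.zeros_like(v)
    for i in range(T):
        ctx = prefix_len if i < prefix_len else i + 1  # non-causal inside the prefix
        scores = fq[i] @ fk[:ctx].T                    # kernelized similarities, shape (ctx,)
        out[i] = (scores @ v[:ctx]) / scores.sum()     # normalized weighted sum of values
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, k, v = rng.normal(size=(3, 10, 4))  # sequence length 10, head dim 4
    print(prefix_linear_attention(q, k, v, prefix_len=6).shape)  # (10, 4)
```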