ByteTransformer: A High-Performance Transformer Boosted for Variable-Length Inputs

Bibliographic Details
Published in Proceedings - IEEE International Parallel and Distributed Processing Symposium, pp. 344 - 355
Main Authors Zhai, Yujia, Jiang, Chengquan, Wang, Leyuan, Jia, Xiaoying, Zhang, Shang, Chen, Zizhong, Liu, Xin, Zhu, Yibo
Format Conference Proceeding
Language English
Published IEEE 01.05.2023
Subjects
Online Access Get full text
ISSN 1530-2075
DOI 10.1109/IPDPS54959.2023.00042


Abstract Transformers have become keystone models in natural language processing over the past decade. They have achieved great popularity in deep learning applications, but the increasing sizes of the parameter spaces required by transformer models generate a commensurate need to accelerate performance. Natural language processing problems are also routinely faced with variable-length sequences, as word counts commonly vary among sentences. Existing deep learning frameworks pad variable-length sequences to a maximal length, which adds significant memory and computational overhead. In this paper, we present ByteTransformer, a high-performance transformer boosted for variable-length inputs. We propose a padding-free algorithm that liberates the entire transformer from redundant computations on zero padded tokens. In addition to algorithmic-level optimization, we provide architecture-aware optimizations for transformer functional modules, especially the performance-critical algorithm Multi-Head Attention (MHA). Experimental results on an NVIDIA A100 GPU with variable-length sequence inputs validate that our fused MHA outperforms PyTorch by 6.13x. The end-to-end performance of ByteTransformer for a forward BERT transformer surpasses state-of-the-art transformer frameworks, such as PyTorch JIT, TensorFlow XLA, Tencent TurboTransformer, Microsoft DeepSpeed-Inference and NVIDIA FasterTransformer, by 87%, 131%, 138%, 74% and 55%, respectively. We also demonstrate the general applicability of our optimization methods to other BERT-like models, including ALBERT, DistilBERT, and DeBERTa.
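The padding-free algorithm described in the abstract can be illustrated with a short sketch. This is not the authors' implementation (ByteTransformer realizes the idea with fused CUDA kernels); it is a minimal PyTorch illustration, assuming a padded [batch, max_len, hidden] activation tensor and a 0/1 attention mask. Valid tokens are gathered into a packed [total_tokens, hidden] tensor before the transformer layers and scattered back afterwards, so no computation is spent on zero-padded positions. The helper names pack_tokens and unpack_tokens are hypothetical.

```python
import torch

def pack_tokens(hidden, mask):
    """hidden: [B, L, H] padded activations; mask: [B, L], 1 = real token, 0 = padding."""
    B, L, H = hidden.shape
    flat_mask = mask.reshape(-1).bool()                             # [B*L]
    indices = torch.nonzero(flat_mask, as_tuple=False).squeeze(1)   # positions of real tokens
    packed = hidden.reshape(B * L, H).index_select(0, indices)      # [total_tokens, H]
    # cumulative sequence lengths (prefix sums) delimit each sequence
    # inside the packed buffer
    seqlens = mask.sum(dim=1)
    cu_seqlens = torch.cumsum(torch.cat([seqlens.new_zeros(1), seqlens]), dim=0)
    return packed, indices, cu_seqlens

def unpack_tokens(packed, indices, B, L):
    """Scatter packed activations back to the padded [B, L, H] layout (zeros at padding)."""
    H = packed.shape[-1]
    out = packed.new_zeros(B * L, H)
    out.index_copy_(0, indices, packed)
    return out.reshape(B, L, H)

# toy usage: four sequences of lengths 3, 8, 5, 2 padded to max_len = 8
B, L, H = 4, 8, 16
lengths = torch.tensor([3, 8, 5, 2])
mask = (torch.arange(L)[None, :] < lengths[:, None]).int()
hidden = torch.randn(B, L, H) * mask.unsqueeze(-1)        # zero out padded positions
packed, indices, cu_seqlens = pack_tokens(hidden, mask)
print(packed.shape)                                       # torch.Size([18, 16]) -- 18 real tokens
assert torch.allclose(unpack_tokens(packed, indices, B, L), hidden)
```

The cu_seqlens offsets are the kind of bookkeeping a variable-length attention kernel would typically consume to delimit individual sequences inside the packed buffer.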
Author Zhang, Shang
Chen, Zizhong
Jiang, Chengquan
Wang, Leyuan
Liu, Xin
Zhai, Yujia
Jia, Xiaoying
Zhu, Yibo
Author_xml – sequence: 1
  givenname: Yujia
  surname: Zhai
  fullname: Zhai, Yujia
  organization: University of California, Riverside
– sequence: 2
  givenname: Chengquan
  surname: Jiang
  fullname: Jiang, Chengquan
  organization: ByteDance Ltd
– sequence: 3
  givenname: Leyuan
  surname: Wang
  fullname: Wang, Leyuan
  organization: ByteDance Ltd
– sequence: 4
  givenname: Xiaoying
  surname: Jia
  fullname: Jia, Xiaoying
  organization: ByteDance Ltd
– sequence: 5
  givenname: Shang
  surname: Zhang
  fullname: Zhang, Shang
  organization: NVIDIA Corporation
– sequence: 6
  givenname: Zizhong
  surname: Chen
  fullname: Chen, Zizhong
  organization: University of California, Riverside
– sequence: 7
  givenname: Xin
  surname: Liu
  fullname: Liu, Xin
  email: liuxin.ai@bytedance.com
  organization: ByteDance Ltd
– sequence: 8
  givenname: Yibo
  surname: Zhu
  fullname: Zhu, Yibo
  organization: ByteDance Ltd
CODEN IEEPAD
ContentType Conference Proceeding
DOI 10.1109/IPDPS54959.2023.00042
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Xplore POP ALL
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP All) 1998-Present
Discipline Computer Science
EISBN 9798350337662
EISSN 1530-2075
EndPage 355
ExternalDocumentID 10177488
Genre orig-research
IsPeerReviewed false
IsScholarly false
Language English
PageCount 12
ParticipantIDs ieee_primary_10177488
PublicationCentury 2000
PublicationDate 2023-May
PublicationDateYYYYMMDD 2023-05-01
PublicationDate_xml – month: 05
  year: 2023
  text: 2023-May
PublicationDecade 2020
PublicationTitle Proceedings - IEEE International Parallel and Distributed Processing Symposium
PublicationTitleAbbrev IPDPS
PublicationYear 2023
Publisher IEEE
Publisher_xml – name: IEEE
SourceID ieee
SourceType Publisher
StartPage 344
SubjectTerms BERT
Bit error rate
CUTLASS
Deep learning
Distributed processing
Graphics processing units
Large Language Models
Multi-head Attention
Natural Language Processing
NVIDIA GPU
Optimization methods
Technological innovation
Training
Transformer
Title ByteTransformer: A High-Performance Transformer Boosted for Variable-Length Inputs
URI https://ieeexplore.ieee.org/document/10177488