An optimized hybrid evolutionary algorithm for accelerating automatic code optimization
| Main Authors | Zhang, Yasong; Li, Yue; Wang, Xiaoling |
|---|---|
| Format | Conference Proceeding |
| Language | English |
| Published | SPIE, 22.02.2023 |
| ISBN | 9781510662964; 1510662960 |
| ISSN | 0277-786X |
| DOI | 10.1117/12.2667392 |
| Abstract | Deep learning models must be highly optimized by experts or hardware suppliers before they are deployed in practice, and enabling compilers to optimize code automatically has long been a goal of the compiler community. However, there is no feasible solution in practice, as running candidate programs costs a considerable amount of optimization time before a desired latency is obtained. To make up for the long optimization time of the TVM compiler, a novel optimized hybrid aging evolutionary algorithm is proposed to predict the running time of the code and accelerate automatic code optimization for Ansor, an auto-tuning framework for TVM. The algorithm alternately removes the worst and the oldest individuals in the population during the evolution process. Unlike previous evolutionary algorithms, an individual that seeks to survive in the evolving population for a long time must show excellent scalability and flexibility, not just its own adaptability. In this way, the algorithm not only retains a strong search capability but also improves convergence speed and accuracy, significantly reducing the optimization time of tensor programs for deep learning inference. Experimental results show that the algorithm accelerates convergence: for the same task, it provides 9% to 16% shorter optimization time on average while achieving similar or better optimization quality (i.e., inference time). (A minimal sketch of the alternating worst/oldest removal scheme follows this record.) |
|---|---|
| Authors and Affiliations | Zhang, Yasong (National Key Laboratory of Science and Technology on Aerospace Intelligence Control, China); Li, Yue (National Key Laboratory of Science and Technology on Aerospace Intelligence Control, China); Wang, Xiaoling (Beijing Aerospace Automatic Control Institute, China) |
| Copyright | COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only. |
| Discipline | Engineering |
| Editors | Hu, Naijing; Zhang, Guanglin |
| Pages | 125871Z to 125871Z-9 |
| Notes | Conference Location: Shanghai, China; Conference Date: 2022-09-23 to 2022-09-25 |
| URI | http://www.dx.doi.org/10.1117/12.2667392 |
| Volume | 12587 |
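The abstract above describes the core of the proposed search strategy: during evolution, the population is trimmed by alternately discarding its worst-scoring and its oldest member, so a long-lived individual must keep competing on merit rather than merely persisting. The following is a minimal sketch of that alternating-removal idea only, under assumed placeholders (a toy genome, a random surrogate cost, and simple Gaussian mutation); it is not the paper's implementation and does not call Ansor's or TVM's APIs.

```python
import random
from dataclasses import dataclass


@dataclass
class Individual:
    genome: list      # stand-in for a candidate tensor-program schedule
    fitness: float    # stand-in for predicted latency (lower is better)
    age: int = 0      # generations survived so far


def evaluate(genome):
    # Placeholder surrogate cost model; in an auto-tuner this would be a
    # learned latency predictor or an on-device measurement.
    return sum(x * x for x in genome) + random.random() * 0.01


def evolve(pop_size=32, genome_len=8, generations=200):
    population = []
    for _ in range(pop_size):
        genome = [random.uniform(-1.0, 1.0) for _ in range(genome_len)]
        population.append(Individual(genome, evaluate(genome)))

    for gen in range(generations):
        # Create one offspring per generation by mutating a random parent.
        parent = random.choice(population)
        child_genome = [x + random.gauss(0.0, 0.1) for x in parent.genome]
        population.append(Individual(child_genome, evaluate(child_genome)))

        # Alternate the removal criterion: drop the worst individual on even
        # generations and the oldest on odd ones, so survivors must stay
        # competitive over time instead of merely persisting.
        if gen % 2 == 0:
            victim = max(population, key=lambda ind: ind.fitness)  # worst
        else:
            victim = max(population, key=lambda ind: ind.age)      # oldest
        population.remove(victim)

        for ind in population:
            ind.age += 1

    return min(population, key=lambda ind: ind.fitness)


if __name__ == "__main__":
    best = evolve()
    print("best surrogate cost:", round(best.fitness, 4))
```

The periodic removal of the oldest individual is what supplies the "aging" pressure; per the abstract, this is what the authors credit with faster convergence and 9% to 16% shorter optimization time at similar or better inference latency.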