Concrete Type Inference for Code Optimization using Machine Learning with SMT Solving

Bibliographic Details
Published in: Proceedings of the ACM on Programming Languages, Vol. 7, No. OOPSLA2, pp. 773–800
Main Authors: Fangke Ye, Jisheng Zhao, Jun Shirako, Vivek Sarkar
Format: Journal Article
Language: English
Published: New York, NY, USA: ACM, 16 October 2023
ISSN/EISSN: 2475-1421
DOI: 10.1145/3622825


Abstract: Despite the widespread popularity of dynamically typed languages such as Python, it is well known that they pose significant challenges to code optimization due to the lack of concrete type information. To overcome this limitation, many ahead-of-time optimizing compiler approaches for Python rely on programmers to provide optional type information as a prerequisite for extensive code optimization. Since few programmers provide this information, a large majority of Python applications are executed without the benefit of code optimization, thereby contributing collectively to a significant worldwide wastage of compute and energy resources. In this paper, we introduce a new approach to concrete type inference that is shown to be effective in enabling code optimization for dynamically typed languages, without requiring the programmer to provide any type information. We explore three kinds of type inference algorithms in our approach based on: 1) machine learning models including GPT-4, 2) constraint-based inference based on SMT solving, and 3) a combination of 1) and 2). Our approach then uses the output from type inference to generate multi-version code for a bounded number of concrete type options, while also including a catch-all untyped version for the case when no match is found. The typed versions are then amenable to code optimization. Experimental results show that the combined algorithm in 3) delivers far superior precision and performance compared to the separate algorithms for 1) and 2). The performance improvement due to type inference, in terms of geometric mean speedup across all benchmarks compared to standard Python, when using 3) is 26.4× with Numba as an AOT optimizing back-end and 62.2× with the Intrepydd optimizing compiler as a back-end. These vast performance improvements can have a significant impact on programmers’ productivity, while also reducing their applications’ use of compute and energy resources.
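The multi-version scheme described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's actual code generator: the names `specialize`, `add_int`, `add_float`, and `add_generic` are hypothetical, and in the real system the typed variants would be compiled by an AOT back-end such as Numba or Intrepydd while the catch-all remains ordinary interpreted Python.

```python
# Hedged sketch: dispatch among a bounded set of concretely typed variants,
# falling back to an untyped catch-all when no signature matches.
def specialize(typed_versions, fallback):
    """Return a dispatcher that picks an implementation by argument types."""
    def dispatch(*args):
        key = tuple(type(a) for a in args)
        impl = typed_versions.get(key, fallback)  # catch-all if no match
        return impl(*args)
    return dispatch

def add_int(a, b):      # stand-in for an AOT-compiled int variant
    return a + b

def add_float(a, b):    # stand-in for an AOT-compiled float variant
    return a + b

def add_generic(a, b):  # untyped catch-all, executed as usual
    return a + b

add = specialize({(int, int): add_int, (float, float): add_float}, add_generic)
```

For example, `add(1, 2)` routes to the int variant, while `add("a", "b")` falls through to the untyped version.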
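The constraint-based side of the approach can be illustrated with a hand-rolled toy. This only mimics the idea: each program variable starts with a set of candidate concrete types, and each use of the variable contributes a constraint that narrows the set. The paper instead encodes such constraints for an SMT solver (e.g., Z3 or cvc5); the function and variable names below are hypothetical.

```python
# Hedged toy of constraint-based concrete type inference: intersect each
# variable's candidate type set with every applicable constraint until
# a fixed point is reached. A real implementation would hand these
# constraints to an SMT solver rather than propagate them by hand.
def infer(candidates, constraints):
    """Narrow candidate type sets to a fixed point; return the result."""
    changed = True
    while changed:
        changed = False
        for var, allowed in constraints:
            narrowed = candidates[var] & allowed
            if narrowed != candidates[var]:
                candidates[var] = narrowed
                changed = True
    return candidates

# x flows through two uses, each of which restricts its possible types.
types = {"x": {"int", "float", "str"}}
facts = [
    ("x", {"int", "float"}),  # x appears in an arithmetic expression
    ("x", {"int"}),           # x is used as a list index
]
result = infer(types, facts)
```

Here the two constraints jointly pin `x` down to the single concrete type `int`, which is exactly the situation that makes a typed code version profitable.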
Article Number: 249
Authors:
– Fangke Ye (ORCID 0000-0002-8545-6116), Georgia Institute of Technology, Atlanta, USA. Email: yefangke@gatech.edu
– Jisheng Zhao (ORCID 0000-0003-0334-0492), Georgia Institute of Technology, Atlanta, USA. Email: jisheng.zhao@cc.gatech.edu
– Jun Shirako (ORCID 0000-0002-7900-7680), Georgia Institute of Technology, Atlanta, USA. Email: shirako@gatech.edu
– Vivek Sarkar (ORCID 0000-0002-3433-8830), Georgia Institute of Technology, Atlanta, USA. Email: vsarkar@gatech.edu
Copyright: Owner/Author
Discipline: Computer Science
Open Access: Yes
Peer Reviewed: Yes
Issue: OOPSLA2
Keywords: Machine Learning; Type Inference; Code Optimization; Python
License: This work is licensed under a Creative Commons Attribution 4.0 International License.
Open Access Link: https://dl.acm.org/doi/pdf/10.1145/3622825
Page Count: 28
Publication Title Abbreviation: ACM PACMPL
– reference: Wes McKinney. 2010. Data structures for statistical computing in python. In Proceedings of the 9th Python in Science Conference. 445, 51–56.
– reference: Robin Milner. 1978. A Theory of Type Polymorphism in Programming. J. Comput. System Sci., 17, 3 (1978), 348–375. issn:0022-0000 https://doi.org/10.1016/0022-0000(78)90014-4 10.1016/0022-0000(78)90014-4
– reference: Ariya Shajii, Gabriel Ramirez, Haris Smajlović, Jessica Ray, Bonnie Berger, Saman Amarasinghe, and Ibrahim Numanagić. 2023. Codon: A Compiler for High-Performance Pythonic Applications and DSLs. In Proceedings of the 32nd ACM SIGPLAN International Conference on Compiler Construction (CC 2023). Association for Computing Machinery, New York, NY, USA. 191–202. isbn:9798400700880 https://doi.org/10.1145/3578360.3580275 10.1145/3578360.3580275
– reference: Kevin Jesse, Premkumar T. Devanbu, and Toufique Ahmed. 2021. Learning Type Annotation: Is Big Data Enough? In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2021). Association for Computing Machinery, New York, NY, USA. 1483–1486. isbn:9781450385626 https://doi.org/10.1145/3468264.3473135 10.1145/3468264.3473135
– reference: Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven C. H. Hoi. 2022. CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning. arxiv:2207.01780.
– reference: Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. arxiv:1912.01703.
– reference: John Plevyak and Andrew A. Chien. 1994. Precise Concrete Type Inference for Object-Oriented Languages. In Proceedings of the Ninth Annual Conference on Object-Oriented Programming Systems, Language, and Applications (OOPSLA ’94). Association for Computing Machinery, New York, NY, USA. 324–340. isbn:0897916883 https://doi.org/10.1145/191080.191130 10.1145/191080.191130
– reference: Ben Johnson. 2018. graph-changepoint. https://github.com/bkj/graph-changepoint
– reference: Irene Vlassi Pandi, Earl T. Barr, Andrew D. Gordon, and Charles Sutton. 2020. OptTyper: Probabilistic Type Inference by Optimising Logical and Natural Constraints. https://doi.org/10.48550/ARXIV.2004.00348
– reference: Ben Johnson. 2019. IP-NSW. https://github.com/prog-eval/prog-eval/tree/master/ipnsw
– reference: Fangke Ye, Jisheng Zhao, and Vivek Sarkar. 2021. Advanced Graph-Based Deep Learning for Probabilistic Type Inference. arxiv:2009.05949.
Snippet Despite the widespread popularity of dynamically typed languages such as Python, it is well known that they pose significant challenges to code optimization...
SourceID unpaywall
crossref
acm
SourceType Open Access Repository
Index Database
Publisher
StartPage 773
SubjectTerms Automated static analysis
Compilers
Data types and structures
Software and its engineering
SubjectTermsDisplay Software and its engineering -- Automated static analysis
Software and its engineering -- Compilers
Software and its engineering -- Data types and structures
Title Concrete Type Inference for Code Optimization using Machine Learning with SMT Solving
URI https://dl.acm.org/doi/10.1145/3622825
https://dl.acm.org/doi/pdf/10.1145/3622825
UnpaywallVersion publishedVersion
Volume 7