Advances in Neural Rendering
Published in: Computer Graphics Forum, Vol. 41, No. 2, pp. 703–735
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 1 May 2022
ISSN: 0167-7055 (print); 1467-8659 (online)
DOI: 10.1111/cgf.14507
Abstract: Synthesizing photo‐realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take specifically defined representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real‐world observations. Neural rendering is a leap forward towards the goal of synthesizing photo‐realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state‐of‐the‐art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D‐consistent by design, enabling applications such as novel viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we cover neural scene representations for modeling non‐rigidly deforming objects and scene editing and composition. While most of these approaches are scene‐specific, we also discuss techniques that generalize across object classes and can be used for generative tasks. In addition to reviewing these state‐of‐the‐art methods, we provide an overview of fundamental concepts and definitions used in the current literature. We conclude with a discussion on open challenges and social implications.
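One of the implicit scene representations the abstract names, a signed distance field, can be rendered directly with a classical algorithm. A minimal sketch (the scene here is assumed to be a single unit sphere, and `sphere_sdf` / `sphere_trace` are illustrative names, not functions from the report) of sphere tracing, which marches a ray forward by the signed distance until it reaches the surface:

```python
import math

# A signed distance field (SDF) for a unit sphere: returns the distance
# from point p to the nearest surface point (negative inside the sphere).
def sphere_sdf(p, radius=1.0):
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - radius

# Sphere tracing: the SDF value is a safe step size, so advance the ray
# by it until the surface is reached (distance below eps) or the ray escapes.
def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, t_max=100.0):
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t  # hit: ray parameter at the surface
        t += d
        if t > t_max:
            break
    return None  # miss

# A ray starting at z = -3 and pointing along +z hits the unit sphere at t = 2.
t_hit = sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), sphere_sdf)
```

Differentiable variants of exactly this kind of renderer are what make the inverse-rendering losses described above possible: the SDF is replaced by a learned network and gradients flow from image error back to its parameters.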
Authors: A. Tewari, J. Thies, B. Mildenhall, P. Srinivasan, E. Tretschk, W. Yifan, C. Lassner, V. Sitzmann, R. Martin‐Brualla, S. Lombardi, T. Simon, C. Theobalt, M. Nießner (Technical University of Munich), J. T. Barron, G. Wetzstein, M. Zollhöfer, V. Golyanik
Copyright: © 2022 The Author(s). Computer Graphics Forum © 2022 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
Funding: ERC (grants 770784 and 804724)
Notes: Equal contribution.
fields publication-title: arXiv preprint arXiv:2106.13228 – ident: e_1_2_9_158_2 doi: 10.1109/ICCV.2019.00768 – ident: e_1_2_9_205_2 – ident: e_1_2_9_211_2 doi: 10.1145/1882261.1866201. – ident: e_1_2_9_60_2 – ident: e_1_2_9_223_2 doi: 10.1109/CVPR46437.2021.01120 – ident: e_1_2_9_134_2 doi: 10.1111/cgf.14344. – ident: e_1_2_9_44_2 doi: 10.1109/CVPR52688.2022.01254 – ident: e_1_2_9_281_2 – ident: e_1_2_9_5_2 doi: 10.1007/978-3-030-58452-8_26 – ident: e_1_2_9_63_2 doi: 10.1109/ICCV48922.2021.00566 – ident: e_1_2_9_12_2 – ident: e_1_2_9_39_2 doi: 10.1145/166117.166153 – ident: e_1_2_9_177_2 doi: 10.1609/aaai.v32i1.11671 – ident: e_1_2_9_120_2 – ident: e_1_2_9_135_2 doi: 10.1109/CVPR42600.2020.00209 – ident: e_1_2_9_126_2 – ident: e_1_2_9_18_2 – ident: e_1_2_9_138_2 doi: 10.1109/CVPR46437.2021.00713 – ident: e_1_2_9_212_2 doi: 10.1109/CVPR52688.2022.00538 – ident: e_1_2_9_280_2 – ident: e_1_2_9_284_2 doi: 10.1007/978-3-031-20062-5_16 – ident: e_1_2_9_242_2 doi: 10.1109/CVPR52688.2022.01572 – ident: e_1_2_9_149_2 doi: 10.1109/ICCV.2019.00484 – ident: e_1_2_9_257_2 doi: 10.1109/CVPR46437.2021.00704 – ident: e_1_2_9_269_2 doi: 10.1145/3355089.3356513 – ident: e_1_2_9_129_2 doi: 10.1145/3450626.3459863. 
– ident: e_1_2_9_46_2 doi: 10.1109/CVPR52688.2022.01041 – ident: e_1_2_9_144_2 doi: 10.1109/CVPR.2019.00704 – ident: e_1_2_9_204_2 – ident: e_1_2_9_271_2 – ident: e_1_2_9_3_2 doi: 10.1109/CVPR52688.2022.01920 – ident: e_1_2_9_94_2 doi: 10.1145/2766977 – ident: e_1_2_9_31_2 – ident: e_1_2_9_154_2 – ident: e_1_2_9_28_2 – ident: e_1_2_9_193_2 doi: 10.1109/CVPR.2017.701 – ident: e_1_2_9_74_2 doi: 10.1109/ICCV48922.2021.00582 – ident: e_1_2_9_258_2 – ident: e_1_2_9_189_2 – ident: e_1_2_9_274_2 doi: 10.1109/ICCV48922.2021.01554 – ident: e_1_2_9_110_2 – ident: e_1_2_9_216_2 doi: 10.1109/CVPR.2019.00254 – ident: e_1_2_9_112_2 doi: 10.1109/ICCV48922.2021.01286 – ident: e_1_2_9_88_2 doi: 10.1109/CVPR52688.2022.00094 – ident: e_1_2_9_66_2 – ident: e_1_2_9_122_2 doi: 10.1145/2816795.2818013 – ident: e_1_2_9_24_2 – ident: e_1_2_9_226_2 doi: 10.1109/CVPR42600.2020.00063 – ident: e_1_2_9_10_2 – ident: e_1_2_9_192_2 – ident: e_1_2_9_259_2 – ident: e_1_2_9_130_2 – ident: e_1_2_9_34_2 doi: 10.1109/CVPR46437.2021.00574 – ident: e_1_2_9_203_2 doi: 10.1117/12.386541 – ident: e_1_2_9_68_2 doi: 10.1007/s003710050084 – ident: e_1_2_9_247_2 doi: 10.1145/3503161.3547808 – ident: e_1_2_9_29_2 doi: 10.1145/3072959.3073601. 
– ident: e_1_2_9_13_2 doi: 10.1109/ICCV48922.2021.00541 – ident: e_1_2_9_7_2 doi: 10.1109/CVPR.2019.00255 – ident: e_1_2_9_84_2 – ident: e_1_2_9_67_2 – ident: e_1_2_9_272_2 doi: 10.1109/CVPR46437.2021.00455 – ident: e_1_2_9_101_2 – ident: e_1_2_9_70_2 – ident: e_1_2_9_132_2 doi: 10.1109/CVPR52688.2022.01577 – ident: e_1_2_9_262_2 – ident: e_1_2_9_238_2 doi: 10.1109/CVPR46437.2021.00565 – ident: e_1_2_9_246_2 doi: 10.1109/CVPR46437.2021.00843 – ident: e_1_2_9_19_2 – ident: e_1_2_9_163_2 – ident: e_1_2_9_190_2 – ident: e_1_2_9_111_2 – ident: e_1_2_9_250_2 – ident: e_1_2_9_282_2 doi: 10.1109/ICCV48922.2021.00646 – ident: e_1_2_9_220_2 doi: 10.1007/978-3-030-58517-4_42 – ident: e_1_2_9_139_2 – ident: e_1_2_9_152_2 doi: 10.1145/3355089.3356498 – ident: e_1_2_9_25_2 doi: 10.1145/383259.383266 – ident: e_1_2_9_283_2 doi: 10.1109/CVPR52688.2022.00759 – ident: e_1_2_9_231_2 doi: 10.1109/CVPR.2017.30 – ident: e_1_2_9_117_2 – ident: e_1_2_9_57_2 doi: 10.1109/CVPR52688.2022.01786 – ident: e_1_2_9_241_2 – ident: e_1_2_9_278_2 – ident: e_1_2_9_51_2 doi: 10.1109/CVPR.2019.00247 – ident: e_1_2_9_187_2 – volume: 18 start-page: 1 year: 2018 ident: e_1_2_9_115_2 article-title: Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization publication-title: Journal of Machine Learning Research – ident: e_1_2_9_206_2 doi: 10.1109/CVPR42600.2020.00261 – ident: e_1_2_9_17_2 – ident: e_1_2_9_73_2 doi: 10.1111/cgf.13369 – ident: e_1_2_9_218_2 – ident: e_1_2_9_166_2 doi: 10.1109/ICCV48922.2021.00554 – ident: e_1_2_9_277_2 doi: 10.1109/TVCG.2002.1021576 – ident: e_1_2_9_260_2 – ident: e_1_2_9_194_2 – ident: e_1_2_9_261_2 doi: 10.1109/CVPR52688.2022.00542 – ident: e_1_2_9_36_2 – ident: e_1_2_9_93_2 – ident: e_1_2_9_96_2 – ident: e_1_2_9_127_2 – ident: e_1_2_9_27_2 doi: 10.1109/CVPR46437.2021.00782 – ident: e_1_2_9_131_2 doi: 10.1145/3272127.3275047 – ident: e_1_2_9_140_2 – ident: e_1_2_9_270_2 doi: 10.1109/CVPR46437.2021.01261 – ident: e_1_2_9_105_2 – ident: e_1_2_9_87_2 doi: 
10.1109/CVPR52688.2022.01782 – ident: e_1_2_9_171_2 doi: 10.1109/ICCV48922.2021.01405 – ident: e_1_2_9_167_2 – ident: e_1_2_9_100_2 doi: 10.1109/3DV50981.2020.00052 – year: 2021 ident: e_1_2_9_179_2 article-title: A shading-guided generative implicit model for shape-accurate 3d-aware image synthesis publication-title: Advances in Neural Information Processing Systems (NeurIPS) – ident: e_1_2_9_41_2 doi: 10.1109/ICCV48922.2021.01139 – ident: e_1_2_9_196_2 doi: 10.1109/CVPR.2018.00439 – ident: e_1_2_9_86_2 doi: 10.1109/CVPR42600.2020.00133 – year: 2020 ident: e_1_2_9_227_2 article-title: Fourier features let networks learn high frequency functions in low dimensional domains publication-title: NeurIPS – ident: e_1_2_9_236_2 – ident: e_1_2_9_109_2 doi: 10.1145/3197517.3201383 – ident: e_1_2_9_169_2 doi: 10.1109/CVPR46437.2021.01018 – ident: e_1_2_9_153_2 doi: 10.1109/CVPR.2015.7298631 – ident: e_1_2_9_157_2 doi: 10.1109/CVPR42600.2020.00356 – ident: e_1_2_9_107_2 doi: 10.1007/978-3-319-10584-0_11 – ident: e_1_2_9_15_2 – ident: e_1_2_9_276_2 doi: 10.1145/383259.383300 – ident: e_1_2_9_183_2 doi: 10.1109/ICCV.2019.00009 – ident: e_1_2_9_64_2 – ident: e_1_2_9_197_2 doi: 10.1109/ICCV48922.2021.01072 – ident: e_1_2_9_160_2 doi: 10.1111/cgf.14340. – volume: 2 year: 2020 ident: e_1_2_9_114_2 article-title: A SYSTEM FOR MASSIVELY PARALLEL HYPERPARAMETER TUNING publication-title: MLSys – ident: e_1_2_9_108_2 doi: 10.1109/ICCV48922.2021.01235 – ident: e_1_2_9_245_2 doi: 10.1109/ICCV48922.2021.00556 – ident: e_1_2_9_45_2 – ident: e_1_2_9_81_2 doi: 10.1109/ICCV48922.2021.00579 – ident: e_1_2_9_237_2 – ident: e_1_2_9_59_2 – ident: e_1_2_9_71_2 doi: 10.1145/3355089.3356506 – ident: e_1_2_9_214_2 doi: 10.1109/CVPR42600.2020.00016 – ident: e_1_2_9_185_2 – ident: e_1_2_9_133_2 doi: 10.1109/CVPR46437.2021.00149 – ident: e_1_2_9_75_2 doi: 10.1016/0893-6080(89)90020-8. 
– ident: e_1_2_9_235_2 – ident: e_1_2_9_228_2 doi: 10.1007/978-3-030-58517-4_18 – ident: e_1_2_9_92_2 doi: 10.1016/j.cag.2004.08.009. – ident: e_1_2_9_78_2 – ident: e_1_2_9_49_2 doi: 10.1007/978-3-030-58558-7_7 – ident: e_1_2_9_265_2 – volume: 40 start-page: 107:1 issue: 107 year: 2021 ident: e_1_2_9_20_2 article-title: Systematically differentiating parametric discontinuities publication-title: ACM Trans. Graph. – ident: e_1_2_9_210_2 – ident: e_1_2_9_273_2 doi: 10.1109/CVPR52688.2022.01318 – ident: e_1_2_9_215_2 doi: 10.1109/CVPR.2019.00026 – ident: e_1_2_9_264_2 – ident: e_1_2_9_195_2 – ident: e_1_2_9_125_2 doi: 10.1109/CVPR46437.2021.00643 – ident: e_1_2_9_89_2 doi: 10.1109/CVPR42600.2020.00604 – ident: e_1_2_9_35_2 – ident: e_1_2_9_85_2 doi: 10.1109/CVPR.2014.59 – ident: e_1_2_9_32_2 doi: 10.1007/978-3-030-58526-6_36 – ident: e_1_2_9_186_2 – ident: e_1_2_9_80_2 doi: 10.1109/ICCV48922.2021.01271 – ident: e_1_2_9_55_2 doi: 10.1109/CVPR42600.2020.00491 – year: 2019 ident: e_1_2_9_174_2 article-title: Pytorch: An imperative style, high-performance deep learning library publication-title: Advances in Neural Information Processing Systems – ident: e_1_2_9_159_2 doi: 10.1109/ICCV48922.2021.00571 – ident: e_1_2_9_128_2 doi: 10.1145/3306346.3323020 – ident: e_1_2_9_142_2 – ident: e_1_2_9_53_2 – ident: e_1_2_9_141_2 doi: 10.1109/ICCV48922.2021.00629 – ident: e_1_2_9_98_2 – ident: e_1_2_9_14_2 doi: 10.1109/ICCV48922.2021.01245 – ident: e_1_2_9_207_2 – ident: e_1_2_9_168_2 doi: 10.1109/3DV53792.2021.00118 – ident: e_1_2_9_201_2 – ident: e_1_2_9_279_2 doi: 10.1145/3478513.3480496 – ident: e_1_2_9_77_2 doi: 10.1109/CVPR52688.2022.01785 – ident: e_1_2_9_103_2 doi: 10.1109/CVPR.2018.00411 – ident: e_1_2_9_229_2 doi: 10.1109/ICCV48922.2021.01272 – ident: e_1_2_9_37_2 doi: 10.1109/ICCV48922.2021.01483 – ident: e_1_2_9_240_2 doi: 10.1109/CVPR52688.2022.01573 – ident: e_1_2_9_182_2 doi: 10.1109/CVPR52688.2022.01255 – ident: e_1_2_9_82_2 doi: 10.1007/978-3-319-46475-6_43 – ident: 
e_1_2_9_175_2 doi: 10.1007/978-3-030-58580-8_31 – ident: e_1_2_9_58_2 doi: 10.1109/ICCV48922.2021.01408 – ident: e_1_2_9_255_2 – ident: e_1_2_9_104_2 doi: 10.1109/CVPR52688.2022.01807 – ident: e_1_2_9_151_2 – ident: e_1_2_9_62_2 doi: 10.1109/38.656788 – ident: e_1_2_9_254_2 doi: 10.1109/3DV53792.2021.00104 – ident: e_1_2_9_118_2 doi: 10.1609/aaai.v32i1.12278 – ident: e_1_2_9_123_2 doi: 10.1109/ICCV48922.2021.00569 – ident: e_1_2_9_69_2 – ident: e_1_2_9_102_2 doi: 10.1007/978-3-030-01267-0_23 |
Tewari, A., Thies, J., Mildenhall, B., Srinivasan, P., et al. "Advances in Neural Rendering." Computer Graphics Forum, vol. 41, no. 2, May 2022, pp. 703–735. doi:10.1111/cgf.14507