Best bang for your buck: GPU nodes for GROMACS biomolecular simulations
| Published in | Journal of computational chemistry Vol. 36; no. 26; pp. 1990 - 2008 |
|---|---|
| Main Authors | Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L.; Grubmüller, Helmut |
| Format | Journal Article |
| Language | English |
| Published | United States: Blackwell Publishing Ltd; Wiley Subscription Services, Inc; John Wiley and Sons Inc, 05.10.2015 |
| ISSN | 0192-8651 (print); 1096-987X (online) |
| DOI | 10.1002/jcc.24030 |
| Abstract | The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well‐exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)‐based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off‐loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance‐to‐price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer‐class GPUs this improvement equally reflects in the performance‐to‐price ratio. Although memory issues in consumer‐class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost‐efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well‐balanced ratio of CPU and consumer‐class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
Molecular dynamics (MD) simulation is a crucial tool for the study of (bio)molecules. MD simulations typically run for weeks or months even on modern computer clusters. Choosing the optimal hardware for carrying out these simulations can increase the trajectory output twofold or threefold. With GROMACS, the maximum amount of MD trajectory for a fixed budget is produced using nodes with a well‐balanced ratio of CPU and consumer‐class GPU resources. |
|---|---|
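The abstract's cost argument, that over a few years of operation the electricity and cooling bill can exceed the purchase price, so trajectory per euro of total cost is the metric that matters, can be sketched numerically. All figures below are hypothetical placeholders, not values from the article:

```python
def lifetime_cost_eur(hardware_eur, power_watt, years, eur_per_kwh=0.2):
    """Total cost of ownership: purchase price plus electricity,
    assuming the node runs 24/7 over its whole lifetime."""
    kwh = power_watt / 1000.0 * 24 * 365 * years
    return hardware_eur + kwh * eur_per_kwh

def ns_per_euro(ns_per_day, hardware_eur, power_watt, years, eur_per_kwh=0.2):
    """Trajectory produced over the node's lifetime per euro spent."""
    total_ns = ns_per_day * 365 * years
    return total_ns / lifetime_cost_eur(hardware_eur, power_watt, years, eur_per_kwh)

# Hypothetical nodes over a 4-year lifetime:
cpu_only = ns_per_euro(ns_per_day=20, hardware_eur=2000, power_watt=300, years=4)
cpu_gpu = ns_per_euro(ns_per_day=60, hardware_eur=2500, power_watt=450, years=4)
# At 300 W, electricity alone (~2100 EUR at 0.20 EUR/kWh) already
# exceeds the 2000 EUR hardware price of the CPU-only node.
```

With these placeholder numbers the GPU node delivers roughly twice the trajectory per euro, mirroring the abstract's conclusion that nodes with a balanced ratio of CPU and consumer-class GPU resources maximize lifetime output.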
| Author | Páll, Szilárd Grubmüller, Helmut Esztermann, Ansgar Kutzner, Carsten Fechner, Martin de Groot, Bert L. |
| AuthorAffiliation | 1: Theoretical and Computational Biophysics Department, Max Planck Institute for Biophysical Chemistry, Am Fassberg 11, 37077 Göttingen, Germany; 2: Theoretical and Computational Biophysics, KTH Royal Institute of Technology, 17121 Stockholm, Sweden |
| CODEN | JCCHDD |
| ContentType | Journal Article |
| Copyright | 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. Copyright Wiley Subscription Services, Inc. Oct 5, 2015 |
| Discipline | Chemistry |
| EISSN | 1096-987X |
| EndPage | 2008 |
| Genre | Journal Article; Feature; News; Research Support, Non-U.S. Gov't |
| GrantInformation | DFG priority programme “Software for Exascale Computing” (SPP 1648) |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 26 |
| Keywords | molecular dynamics; MD; GPU; hybrid parallelization; parallel computing; energy efficiency; benchmark |
| Language | English |
| License | Creative Commons Attribution (CC BY): http://creativecommons.org/licenses/by/4.0; TDM license: http://doi.wiley.com/10.1002/tdm_license_1.1. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. |
| OpenAccessLink | https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jcc.24030 |
| PMID | 26238484 |
| PQID | 1711194813 |
| PQPubID | 48816 |
| PageCount | 19 |
| PublicationCentury | 2000 |
| PublicationDate | October 5, 2015 |
| PublicationDateYYYYMMDD | 2015-10-05 |
| PublicationDecade | 2010 |
| PublicationPlace | United States |
| PublicationTitle | Journal of computational chemistry |
| PublicationTitleAlternate | J. Comput. Chem |
| PublicationYear | 2015 |
| Publisher | Blackwell Publishing Ltd Wiley Subscription Services, Inc John Wiley and Sons Inc |
| References | W. M. Brown, A. Kohlmeyer, S. J. Plimpton, A. N. Tharrington, Comput. Phys. Commun. 2012, 183, 449. R. C. Walker, R. M. Betz, In XSEDE '13 Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery; ACM: New York, NY, 2013. J. C. Phillips, R. Braun, W. Wang, J. Gumbart, E. Tajkhorshid, E. Villa, C. Chipot, R. D. Skeel, L. Kale, K. Schulten, J. Comput. Chem. 2005, 26, 1781. B. Hess, C. Kutzner, D. van der Spoel, E. Lindahl, J. Chem. Theory Comput. 2008, 4, 435. L. Bock, C. Blau, G. Schröder, I. Davydov, N. Fischer, H. Stark, M. Rodnina, A. Vaiana, H. Grubmüller, Nat. Struct. Mol. Biol. 2013, 20, 1390. U. Essmann, L. Perera, M. Berkowitz, T. Darden, H. Lee, J. Chem. Phys. 1995, 103, 8577. C. C. Gruber, J. Pleiss, J. Comput. Chem. 2010, 32, 600. B. R. Brooks, C. L. Brooks, III, A. D. Mackerell, Jr., L. Nilsson, R. J. Petrella, B. Roux, Y. Won, G. Archontis, C. Bartels, S. Boresch, A. Caflisch, L. Caves, Q. Cui, A. R. Dinner, M. Feig, S. Fischer, J. Gao, M. Hodoscek, W. Im, K. Kuczera, T. Lazaridis, J. Ma, V. Ovchinnikov, E. Paci, R. W. Pastor, C. B. Post, J. Z. Pu, M. Schaefer, B. Tidor, R. M. Venable, H. L. Woodcock, X. Wu, W. Yang, D. M. York, M. Karplus, J. Comput. Chem. 2009, 30, 1545. G. Shi, J. Enos, M. Showerman, V. Kindratenko, In 2009 Symposium on Application Accelerators in High Performance Computing (SAAHPC'09); University of Illinois at Urbana-Champaign: Urbana, IL, 2009. M. J. Abraham, J. E. Gready, J. Comput. Chem. 2011, 32, 2031. R. Salomon-Ferrer, D. Case, R. Walker, WIREs Comput. Mol. Sci. 2013, 3, 198. S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. Shirts, J. Smith, P. Kasson, D. van der Spoel, B. Hess, E. Lindahl, Bioinformatics 2013, 29, 845. S. Páll, B. Hess, Comput. Phys. Commun. 2013, 184, 2641. C. Kutzner, R. Apostolov, B. Hess, H. Grubmüller, In Parallel Computing: Accelerating Computational Science and Engineering (CSE); M. Bader, A. Bode, H. J. Bungartz, Eds.; IOS Press: Amsterdam, Netherlands, 2014; pp. 722-730. I. S. Haque, V. S. Pande, In 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing; Stanford University: Stanford, CA, 2010. C. L. Wennberg, T. Murtola, B. Hess, E. Lindahl, J. Chem. Theory Comput. 2013, 9, 3527. S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl, In International Conference on Exascale Applications and Software, EASC 2014, Stockholm, Sweden; S. Markidis, E. Laure, Eds.; Springer International Publishing: Switzerland, 2015; pp. 1-25. M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E. Lindahl, SoftwareX 2015. M. Harvey, G. Giupponi, G. D. Fabritiis, J. Chem. Theory Comput. 2009, 5, 1632. |
| SSID | ssj0003564 |
| Score | 2.5738513 |
| SecondaryResourceType | review_article |
| Snippet | The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing... |
| SourceID | unpaywall swepub pubmedcentral proquest pubmed crossref wiley istex |
| SourceType | Open Access Repository Aggregation Database Index Database Enrichment Source Publisher |
| StartPage | 1990 |
| SubjectTerms | Analytical chemistry benchmark Benchmarking Benchmarks Central processing units Computer Simulation CPUs energy efficiency GPU hybrid parallelization Molecular chemistry molecular dynamics Molecular Dynamics Simulation parallel computing Simulation Software Software and Updates |
| Title | Best bang for your buck: GPU nodes for GROMACS biomolecular simulations |
| URI | https://api.istex.fr/ark:/67375/WNG-KS3G5DW6-S/fulltext.pdf https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fjcc.24030 https://www.ncbi.nlm.nih.gov/pubmed/26238484 https://www.proquest.com/docview/1711194813 https://www.proquest.com/docview/1709707237 https://pubmed.ncbi.nlm.nih.gov/PMC5042102 https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-173956 https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jcc.24030 |
| UnpaywallVersion | publishedVersion |
| Volume | 36 |
| hasFullText | 1 |
| inHoldings | 1 |
| isFullTextHit | |
| isPrint | |
| journalDatabaseRights | – providerCode: PRVEBS databaseName: Inspec with Full Text customDbUrl: eissn: 1096-987X dateEnd: 20241103 omitProxy: false ssIdentifier: ssj0003564 issn: 0192-8651 databaseCode: ADMLS dateStart: 20050601 isFulltext: true titleUrlDefault: https://www.ebsco.com/products/research-databases/inspec-full-text providerName: EBSCOhost – providerCode: PRVWIB databaseName: Wiley Online Library - Core collection (SURFmarket) issn: 0192-8651 databaseCode: DR2 dateStart: 19960101 customDbUrl: isFulltext: true eissn: 1096-987X dateEnd: 99991231 omitProxy: false ssIdentifier: ssj0003564 providerName: Wiley-Blackwell |
| openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Best+bang+for+your+buck%3A+GPU+nodes+for+GROMACS+biomolecular+simulations&rft.jtitle=Journal+of+computational+chemistry&rft.au=Kutzner%2C+Carsten&rft.au=P%C3%A1ll%2C+Szil%C3%A1rd&rft.au=Fechner%2C+Martin&rft.au=Esztermann%2C+Ansgar&rft.date=2015-10-05&rft.pub=John+Wiley+and+Sons+Inc&rft.issn=0192-8651&rft.eissn=1096-987X&rft.volume=36&rft.issue=26&rft.spage=1990&rft.epage=2008&rft_id=info:doi/10.1002%2Fjcc.24030&rft_id=info%3Apmid%2F26238484&rft.externalDocID=PMC5042102 |