A general concurrent algorithm for plasma particle-in-cell simulation codes
We have developed a new algorithm for implementing plasma particle-in-cell (PIC) simulation codes on concurrent processors with distributed memory. This algorithm, named the general concurrent PIC algorithm (GCPIC), has been used to implement an electrostatic PIC code on the 32-node JPL Mark III Hypercube parallel computer.
| Published in | Journal of computational physics Vol. 85; no. 2; pp. 302 - 322 |
|---|---|
| Main Authors | Liewer, Paulett C; Decyk, Viktor K |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Inc, 01.12.1989 |
| ISSN | 0021-9991 1090-2716 |
| DOI | 10.1016/0021-9991(89)90153-8 |
| Abstract | We have developed a new algorithm for implementing plasma particle-in-cell (PIC) simulation codes on concurrent processors with distributed memory. This algorithm, named the general concurrent PIC algorithm (GCPIC), has been used to implement an electrostatic PIC code on the 32-node JPL Mark III Hypercube parallel computer. To decompose a PIC code using the GCPIC algorithm, the physical domain of the particle simulation is divided into sub-domains, equal in number to the number of processors, such that all sub-domains have roughly equal numbers of particles. For problems with non-uniform particle densities, these sub-domains will be of unequal physical size. Each processor is assigned a sub-domain and is responsible for updating the particles in its sub-domain. This algorithm has led to a very efficient parallel implementation of a well-benchmarked 1-dimensional PIC code. The dominant portion of the code, updating the particle positions and velocities, is nearly 100% efficient when the number of particles is increased linearly with the number of hypercube processors used so that the number of particles per processor is constant. For example, the increase in time spent updating particles in going from a problem with 11,264 particles run on 1 processor to 360,448 particles on 32 processors was only 3% (parallel efficiency of 97%). Although implemented on a hypercube concurrent computer, this algorithm should also be efficient for PIC codes on other parallel architectures and for large PIC codes on sequential computers where part of the data must reside on external disks. |
| AbstractList | The general concurrent particle-in-cell (GCPIC) algorithm has been used to implement an electrostatic particle-in-cell code on a 32-node hypercube parallel computer. The GCPIC algorithm decomposes the PIC code by dividing the particle simulation physical domain into subdomains that are equal in number to the number of processors; all subdomains will accordingly possess approximately equal numbers of particles. The portion of the code which updates particle positions and velocities is nearly 100 percent efficient when the number of particles increases linearly with that of hypercube processors. (O.C.) |
| Audience | PUBLIC |
| Author | Liewer, Paulett C Decyk, Viktor K |
| Author Affiliations | Paulett C. Liewer, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109 USA; Viktor K. Decyk, Physics Department, University of California at Los Angeles, Los Angeles, California 90024 USA |
| ContentType | Journal Article |
| Copyright | 1989 1991 INIST-CNRS |
| DOI | 10.1016/0021-9991(89)90153-8 |
| Discipline | Applied Sciences Physics |
| EISSN | 1090-2716 |
| EndPage | 322 |
| GrantInformation | DE-FG03-85ER-53173 DE-FG03-85ER-25009 |
| ISSN | 0021-9991 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 2 |
| Language | English |
| License | https://www.elsevier.com/tdm/userlicense/1.0 CC BY 4.0 |
| PageCount | 21 |
| PublicationDate | 1989-12-01 |
| PublicationPlace | Amsterdam |
| PublicationTitle | Journal of computational physics |
| PublicationYear | 1989 |
| Publisher | Elsevier Inc Elsevier |
| StartPage | 302 |
| SubjectTerms | 70 PLASMA PHYSICS AND FUSION TECHNOLOGY 990200 -- Mathematics & Computers ALGORITHMS COMPUTER CODES Computer Programming And Software COMPUTERS ELECTRIC FIELDS Exact sciences and technology G CODES GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE MATHEMATICAL LOGIC Mathematical methods in physics Numerical approximation and analysis P CODES PARALLEL PROCESSING Physics PLASMA SIMULATION PROGRAMMING SIMULATION 700103 -- Fusion Energy-- Plasma Research-- Kinetics USES |
| Title | A general concurrent algorithm for plasma particle-in-cell simulation codes |
| URI | https://dx.doi.org/10.1016/0021-9991(89)90153-8 https://ntrs.nasa.gov/citations/19900031143 https://www.proquest.com/docview/25223004 https://www.osti.gov/biblio/6931286 |
| Volume | 85 |
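The decomposition the abstract describes — splitting the 1-D simulation domain into as many sub-domains as processors, with boundaries placed so every sub-domain holds roughly the same number of particles — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; `gcpic_boundaries` and all variable names are hypothetical.

```python
import numpy as np

def gcpic_boundaries(positions, num_procs):
    """Place sub-domain boundaries along a 1-D domain so that each of
    num_procs sub-domains holds roughly equal numbers of particles.
    For a non-uniform density the sub-domains come out unequal in
    physical size, as the GCPIC abstract describes."""
    sorted_pos = np.sort(positions)
    n = len(sorted_pos)
    # Interior boundaries sit at the particle-count quantiles.
    cuts = [sorted_pos[(k * n) // num_procs] for k in range(1, num_procs)]
    return np.concatenate(([sorted_pos[0]], cuts, [sorted_pos[-1]]))

# Example: a strongly non-uniform density (most particles near x = 0),
# with the particle count from the paper's single-processor benchmark.
rng = np.random.default_rng(0)
x = rng.exponential(scale=0.1, size=11264) % 1.0  # periodic domain [0, 1)
bounds = gcpic_boundaries(x, num_procs=32)

# Each processor owns one sub-domain; the particle counts are nearly
# equal even though the sub-domain widths differ widely.
counts, _ = np.histogram(x, bins=bounds)
print(counts.min(), counts.max())  # both close to 11264 / 32 = 352
```

Because the boundaries sit at particle-count quantiles, a non-uniform density yields sub-domains of very unequal width but nearly identical load per processor, which is what lets the particle-update step scale almost perfectly: the paper's 3% time increase from 1 to 32 processors corresponds to a parallel efficiency of 1/1.03 ≈ 97%.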