Algorithm vs. Algorithm
Critics raise alarm bells about governmental use of digital algorithms, charging that they are too complex, inscrutable, and prone to bias. A realistic assessment of digital algorithms, though, must acknowledge that government is already driven by algorithms of arguably greater complexity and potential for abuse: the algorithms implicit in human decision-making. The human brain operates algorithmically through complex neural networks. And when humans make collective decisions, they operate via algorithms too: those reflected in legislative, judicial, and administrative processes. Yet these human algorithms undeniably fail and are far from transparent. On an individual level, human decision-making suffers from memory limitations, fatigue, cognitive biases, and racial prejudices, among other problems. On an organizational level, humans succumb to groupthink and free riding, along with other collective dysfunctionalities. As a result, human decisions will in some cases prove far more problematic than their digital counterparts. Digital algorithms, such as machine learning, can improve governmental performance by facilitating outcomes that are more accurate, timely, and consistent. Still, when deciding whether to deploy digital algorithms to perform tasks currently completed by humans, public officials should proceed with care on a case-by-case basis. They should consider both whether a particular use would satisfy the basic preconditions for successful machine learning and whether it would in fact lead to demonstrable improvements over the status quo. The question about the future of public administration is not whether digital algorithms are perfect. Rather, it is a question about what will work better: human algorithms or digital ones.
| Published in | Duke Law Journal, Vol. 71, No. 6, pp. 1281-1340 |
|---|---|
| Main Authors | Cary Coglianese, Alicia Lai |
| Format | Journal Article |
| Language | English |
| Published | Duke University, School of Law, 1 March 2022 |
| Subjects | Administrative agencies; Administrative law; Algorithms; Analysis; Artificial intelligence; Critical legal studies; Decision-making; Influence; Judicial power; Laws, regulations and rules; Legislative bodies; Machine learning; Methods; Powers and duties; Prevention; Race discrimination; Set (Psychology) |
| ISSN | 0012-7086 |