Bias in Generative AI Systems: A 3-Layer Response and Liability Determination

Bibliographic Details
Published in: 当代社会科学 (Contemporary Social Sciences, English edition), no. 2, pp. 121-138
Main Authors: Tang Shuchen, Jiang Huiwen
Format: Journal Article
Language: Chinese
Published: Graduate School of Sichuan Academy of Social Sciences (四川省社会科学院研究生学院), 30.03.2024
ISSN: 2096-0212

Summary: The risk of bias is widely noticed throughout the entire lifecycle of generative artificial intelligence (generative AI) systems. To protect the rights of the public and improve the effectiveness of AI regulation, feasible measures to address the bias problem in the context of big data should be proposed as soon as possible. Since bias originates in every part and aspect of the AI product lifecycle, laws and technical measures should consider each of these layers and take the different causes of bias into account, from data training and modeling to application design. The Interim Measures for the Administration of Generative AI Services (the Interim Measures), formulated by the Office of the Central Cyberspace Affairs Commission (CAC) and other departments, have taken the initiative to govern AI. However, they lack specific details on issues such as how to prevent the risk of bias and reduce the effect of bias in decision-making. The Interim Measures also fail to take the causes of bias into account, and several principles mu