Legal Regulation of Artificial Intelligence Risks
丁晓东
The law of artificial intelligence has shifted from the substitution problem to the risk problem. The EU AI Act distinguishes between product risk and risk to fundamental rights, grades risks into four levels — unacceptable risk, high risk, limited risk, and minimal risk — and makes special provisions for generative artificial intelligence and foundation models. While this classification has a reasonable basis, it also contains internal tensions: the risk grading is unscientific and rigid, and the regulation of generative AI and large models is unreasonable. The root cause of the EU AI Act's legislative dilemma lies in its excessive pursuit of unified regulation of AI risk. China's artificial intelligence legislation should not simply follow the example of the European Union. Instead, China should adhere to the principle of contextual regulation of AI risk, supervising risk according to specific industries, sectors, and existing legal relationships. The law should take AI users, or the products into which AI is integrated, as the object of risk regulation. For AI providers and AI systems themselves, the law should rely mainly on self-regulation, intervening directly only on limited issues such as national security and major public safety. The law should also distinguish between market entities and public authorities, and apply tort law to impose ex post regulation on the former.
法律科学(西北政法大学学报)