Scholarship@WashULaw

Document Type

Response or Comment

Publication Date

2023

Publication Title

Regulations.gov

Abstract

These comments are a response to the National Telecommunications and Information Administration's 2023 request for comment on AI accountability (AI Accountability RFC, Docket No. NTIA-2023-0005).

Responding to NTIA's recent inquiry into AI assurance and accountability, we offer two main arguments regarding the importance of substantive legal protections. First, a myopic focus on concepts of transparency, bias mitigation, and ethics (for which procedural compliance efforts such as audits, assessments, and certifications are proxies) is insufficient for the design and implementation of accountable AI systems. We call rules built around transparency and bias mitigation "AI half-measures" because they provide the appearance of governance but fail, when deployed in isolation, to promote human values or to hold liable those who create and deploy AI systems that cause harm. Second, any rules and regulations concerning AI systems must focus on substantive interventions rather than mere procedure. Flexible consumer protection standards, such as prohibitions on unfair, deceptive, and abusive acts or practices, are the kind of technology-neutral measures that will protect individuals from harmful or unreasonably risky deployments of AI systems while encouraging responsible innovation. Woven together as a vast regulatory fabric, these principles can invigorate and strengthen procedural tools such as audits and certifications, to the benefit of consumers both individually and as a group.

Keywords

Artificial Intelligence, AI, AI Assurance, AI Accountability, Transparency, Bias, Ethics, Trust, Loyalty, Consumer Protection

Publication Citation

Neil M. Richards, Woodrow Hartzog & Jordan Francis, Comments of the Cordell Institute on AI Accountability, Cordell Institute for Policy in Medicine & Law (June 12, 2023), https://www.regulations.gov/comment/NTIA-2023-0005-1291
