From algorithm-based sentencing in courtrooms to self-driving cars, our society is increasingly reliant on artificial intelligence-based software to help make decisions. For all their potential benefits, AI systems can have inherent flaws and cause harmful effects, particularly on vulnerable groups, communities and individuals.

The potential harms are far-reaching. Algorithmic systems used to determine people’s entitlement to benefits, payments or other state support, for example, have been shown on many occasions to make inaccurate and discriminatory decisions. In France, an investigation found that the government’s adoption of an automated system for distributing social security benefits led to delays and errors in 1 percent to 2 percent of applications, affecting 60,000 to 120,000 people.

Similar issues have arisen in the United States, as when Michigan deployed an algorithmic system to reduce the incidence of fraud in unemployment insurance claims. Allegations of claimant fraud increased fivefold under the system, and 90 percent of those allegations were later found to be incorrect. In the meantime, however, thousands of people had wages and bank accounts garnished for repayment.

In some cases, errors in AI-based technologies can even result in physical injury. Late last year, a manslaughter trial began in Los Angeles over a fatal crash caused by a Tesla in “self-driving” mode, raising questions about who should be held responsible. AI-based systems that control car speed and steering were involved in nearly 400 crashes between July 2021 and May 2022.

Part of the challenge is that AI technologies can make the wrong decision even when functioning as designed. Thornier still, several parties are often involved in the development and deployment of AI systems, including developers and users, and the systems themselves often operate autonomously, without human intervention or control. If a decision made by an AI technology leads to harm, who is responsible? The programmer who “trained” the machine learning-based system in the first place? Or the company that put the technology “into the wild”?

With AI systems increasingly being deployed across sectors, it is vital that pathways for redress are established and maintained to ensure that consumers, data subjects or other users of AI systems have recourse in the event they suffer harm.

Governments and civil society groups are already working to create effective governance mechanisms that minimize the risks associated with AI technologies — such as the White House’s recently released “AI Bill of Rights,” the European Union’s AI Act, and the United States’ National AI Initiative — but they have largely failed to account for remediation or redress processes in the event of harmful impacts. The European Union’s AI Act, for example, does not contemplate or provide private rights of action to individuals or groups negatively affected by a deployed AI system — an oversight that has been criticized by civil society and human rights organizations.

The good news? A variety of redress mechanisms are available. In a report published by the University of California, Berkeley’s Center for Long-Term Cybersecurity, “AI’s Redress Problem: Recommendations to Improve Consumer Protection from Artificial Intelligence,” we outline gaps in existing regulation around AI and propose a range of potential solutions that regulators, corporations, and civil society organizations can use to facilitate greater accountability for the use of AI through redress mechanisms.

First, regulators must allow private rights of action and ensure that individuals harmed by the deployment of AI systems can make a regulatory complaint or pursue legal action. Regulators should also allow collective redress, reducing the burden on individuals and ensuring that harmed groups and communities have access to justice. To exercise this right, individuals will need to know when AI systems are in use. A right to be informed is therefore a necessary precursor to a private right of action for recourse.

Policymakers can also make it easier for individuals and communities to raise their grievances, seek accountability and obtain redress by establishing a dedicated AI ombudsman to serve as an independent arbiter of disputes or complaints. An AI ombudsman could also serve as a public repository of AI incidents to allow for regulatory monitoring of trends or risks arising from specific use cases, technologies or industries, providing an opportunity for developers and deployers of AI systems to learn from each other’s mistakes.

Second, consumer organizations, academia and other research institutes should be empowered to either represent individual consumers or bring “general interest” complaints against AI systems that have negative effects. Civil society organizations should keep an ear to the ground and work with underserved individuals or communities to identify harmful effects and seek redress. They can also help ensure that findings from community engagement, audits or research are publicly available.

Third, companies that make AI systems should establish internal ombudsman services to receive and review complaints from stakeholders, including employees and consumers. They also need to engage with external stakeholders, such as academic researchers or consumer advocacy groups, to identify issues of bias, discrimination or unfairness that may exist in AI models. By allowing academic access to these systems, particularly before deployment, companies can obtain valuable and diverse feedback, improving performance and helping to prevent the deployment of biased or inaccurate models.

AI is already transforming our world and will continue to be woven into nearly every aspect of our lives. But developers of AI systems that cause harm should not be able to avoid liability or leave individuals or communities unable to obtain redress for harms suffered due to those systems. Companies, governments and civil society organizations should work now to establish a robust framework to ensure that humans have recourse when AI inevitably causes serious harm in the future.