Legal Requirements to Mitigate Bias in AI Systems | Wilson Sonsini Goodrich & Rosati

An alphabet soup of US government agencies has taken steps to regulate artificial intelligence (AI). Last year, Congress passed the National Artificial Intelligence Initiative Act, which creates many new AI initiatives, committees, and workflows to prepare the federal workforce, lead and fund research, and identify and mitigate risks. In November 2021, the White House announced efforts to create a bill of rights for an automated society. And members of Congress are introducing bills like the Algorithmic Accountability Act and the Algorithmic Fairness Act, aimed at promoting ethical AI decision-making. At the state level, at least 17 state legislatures have introduced AI legislation in 2021.

With this flurry of activity, you might think that no legal requirements involving AI exist today. But you would be wrong. There are plenty of requirements that touch AI already on the books, and some pack a big punch. Here are some U.S. local, state, and federal requirements to be aware of:

  • State and local rules on AI in hiring: The New York City Council has passed a measure prohibiting New York employers from using automated employment decision tools to screen job applicants unless the technology has been the subject of a “bias audit” during the year preceding the use of the tool. Illinois requires employers using AI interviewing technology to inform applicants of how the AI works and to obtain the applicant’s consent. Maryland also requires employers using facial recognition tools when interviewing applicants to obtain the applicant’s consent.
  • Federal laws on the use of AI for eligibility decisions: Under the Fair Credit Reporting Act (FCRA), a provider that gathers information to automate decision-making regarding an applicant’s eligibility for credit, employment, insurance, housing, benefits, or similar transactions may be a “consumer reporting agency.” That status triggers obligations for businesses that use the provider’s services, such as the obligation to provide notice of adverse action to the applicant. For example, suppose an employer purchases AI-based scores to assess whether a candidate will be a good employee. In many circumstances, if the employer denies the candidate a job based on the score, it must, among other things, provide the candidate with a notice of adverse action, which tells the candidate that they can access the underlying information from the provider and correct it if it is wrong.
  • Civil Rights Laws: Although they do not apply specifically to AI, companies should be aware of federal prohibitions against discrimination based on protected characteristics such as race, color, sex or gender, religion, age, disability status, national origin, marital status, and genetic information. These laws apply regardless of whether a human or a machine discriminates. Indeed, in 2019, the Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act, alleging that its advertising platform allowed advertisers to exclude certain categories of consumers from seeing housing ads based on characteristics such as race. If your AI tool discriminates against a protected class, whether intentionally or unintentionally, you could be the subject of a civil rights investigation or lawsuit.
  • Privacy Laws: Given the possibilities for using AI in the healthcare industry, AI developers should be familiar with the requirements of the Health Insurance Portability and Accountability Act (HIPAA). When using consumer data to populate algorithms, companies must also consider federal and state privacy laws that require consumers to be informed of how their information will be used, including HIPAA, the Children’s Online Privacy Protection Act (COPPA), and the Gramm-Leach-Bliley Act. California privacy laws give consumers the right to be informed about how data is collected about them and the right to access, delete, and opt out of certain disclosures of their data to third parties, which may involve AI-based systems. California’s new privacy agency is tasked with issuing regulations that will require companies to provide consumers with meaningful information about the logic involved in automated decision-making processes, a description of the likely outcome of the process with respect to the consumer, and the right to opt out. New privacy laws in Virginia and Colorado will also require companies to offer an opt-out for certain automated processing of consumer data. And several state laws, such as Illinois’ Biometric Information Privacy Act (BIPA), require notice and consent before collecting biometric identifiers, which can feed algorithms.
  • Prohibitions of unfair or deceptive practices: The FTC Act and corresponding state laws prohibit unfair or deceptive practices. For example, if you make false or unsubstantiated statements about the lack of bias in your algorithm, that could be a deceptive practice. The Federal Trade Commission has also said that using an algorithm that discriminates against protected classes could be an unfair practice.

The consequences of violating these laws can be serious. For example, federal agencies can seek and obtain civil penalties for violations of HIPAA and COPPA. The Fair Credit Reporting Act, civil rights statutes, and some state privacy statutes like BIPA include private rights of action, where plaintiffs seek and often obtain significant damages.

So what should companies that create and use algorithms do now to avoid violating these requirements? At the very least, they should think about these issues, ask questions, assess the risks, and mitigate those risks. Here are a few tips:

  • Develop interdisciplinary, diverse teams to build and review algorithms: Lawyers, engineers, economists, data scientists, ethicists, and others can spot different issues. Creating diverse teams with different perspectives and life experiences is essential to any effort to reduce bias.
  • Educate your teams on the causes of bias and how to mitigate them: One cause of bias could be a lack of diverse representation in the dataset used to train the algorithm. But even a diverse dataset can reproduce historical patterns of bias. A ProPublica study a few years ago found that, under an algorithm judges used to help determine whether defendants should be released on bail, black defendants were twice as likely as white defendants to be wrongly classified as having a higher risk of violent recidivism. Institutional biases in the US criminal justice system may have been responsible for this result.
  • Use a risk-based approach to determine appropriate solutions: An algorithmic decision about who receives an ad for a financial or educational opportunity should receive more scrutiny than a decision about a shoe ad. An algorithm that decides who gets a job or credit should receive more scrutiny than one that determines whether a customer gets a product upgrade. In fact, if you are making eligibility decisions based on an algorithm, consider whether you need to comply with the FCRA. At the same time, even low-risk decisions deserve to be challenged and discussed internally.
  • Ask questions, and develop and document programs, policies, and procedures based on the answers: What will the automated decision do? Which groups are you concerned about when it comes to training data errors, disparate treatment, and disparate impact? How will potential biases be detected, measured, and corrected? What are the benefits of developing the algorithm, and what are the potential bad outcomes? How transparent will you be about the algorithm development process? A good set of sample questions can be found here.
  • Consider effective ways to test the algorithm before deploying it: If there is a disparate impact on different groups, ask additional questions. Do you need additional training data? Do you need human intervention or review before a decision is made based on the data? One common first-pass screen is shown in the first sketch following this list.
  • Consider periodic audits: Periodic audits can reveal persistent problems with an algorithm. Depending on the risk level of the algorithmic decision, these audits can be internal or external. Companies may also consider vetting their programs with outside advocacy groups.
  • Comply with privacy laws: Be aware of the legal requirements outlined above and be sure to follow your company’s policies regarding the use of consumer data. This includes honoring the privacy statements you make to consumers.
  • Consider how you would explain any algorithmic decision: Some may say that it is too difficult to explain the myriad factors that could affect algorithmic decision-making, but under the Fair Credit Reporting Act, such an explanation is required in certain circumstances. For example, if a credit score is used to deny credit or to offer credit on less favorable terms, the law requires consumers to receive a notice, a description of the score, and the key factors that adversely affected the score. If you’re using AI to make consumer decisions in any context, consider how you would explain your decision to your customer if asked; the second sketch following this list shows one simplified way to surface such factors.
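
On the pre-deployment testing tip above: the following is a minimal sketch (in Python, using pandas) of one common first-pass screen for disparate impact, comparing selection rates across groups and flagging any group whose rate falls below four-fifths of the highest rate. The column names "group" and "selected" and the toy data are illustrative assumptions, not a real schema, and a flag here is a prompt for further questions and legal review, not a legal conclusion.

    # First-pass disparate impact screen: compare per-group selection rates and
    # flag groups whose rate is below 80% of the highest rate (a rough screen
    # sometimes called the "four-fifths rule"). Column names are illustrative.
    import pandas as pd

    def selection_rate_report(df: pd.DataFrame,
                              group_col: str = "group",
                              outcome_col: str = "selected") -> pd.DataFrame:
        """Per-group selection rates and their ratio to the highest rate."""
        rates = df.groupby(group_col)[outcome_col].mean().rename("selection_rate")
        report = rates.to_frame()
        report["ratio_to_top"] = report["selection_rate"] / report["selection_rate"].max()
        report["flag"] = report["ratio_to_top"] < 0.8  # gap worth investigating
        return report

    if __name__ == "__main__":
        # Toy data: 1 = the tool recommended the applicant, 0 = it did not.
        data = pd.DataFrame({
            "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
            "selected": [1,   1,   1,   0,   1,   0,   0,   0],
        })
        print(selection_rate_report(data))

In this toy example, group B's selection rate (0.25) is one-third of group A's (0.75), so the screen would flag it, and the team would then dig into the training data, the features used, and whether human review is needed before any decision is made.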
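
On the explanation tip above: below is a simplified sketch of how a team might surface the key factors that pushed one applicant's score down, assuming, purely for illustration, a scikit-learn logistic regression credit model with made-up feature names and toy data. Real adverse action notices under the FCRA have specific content requirements, so treat this as an engineering starting point to discuss with counsel, not a compliance template.

    # Surface the features that contributed most negatively to one applicant's
    # score under a simple logistic-regression model. This is a simplistic
    # attribution (coefficient x feature value); production systems usually
    # compare against a reference profile and use vetted reason-code logic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["payment_history", "utilization", "account_age", "recent_inquiries"]

    # Toy training data: each row is a past applicant; 1 = approved, 0 = denied.
    X = np.array([
        [0.9, 0.2, 10.0, 0.0],
        [0.4, 0.9,  1.0, 5.0],
        [0.8, 0.3,  7.0, 1.0],
        [0.3, 0.8,  2.0, 6.0],
    ])
    y = np.array([1, 0, 1, 0])

    model = LogisticRegression().fit(X, y)

    def main_adverse_factors(applicant: np.ndarray, top_n: int = 2) -> list:
        """Return the feature names that most lowered this applicant's log-odds."""
        contributions = model.coef_[0] * applicant  # per-feature contribution
        order = np.argsort(contributions)           # most negative first
        return [feature_names[i] for i in order[:top_n]]

    applicant = np.array([0.5, 0.85, 2.0, 4.0])
    print("Key factors adversely affecting the score:",
          main_adverse_factors(applicant))

The point is not the particular attribution method but the practice: if the system cannot produce a plausible, human-readable answer to "why was this applicant scored this way?", that is itself a risk signal worth addressing before deployment.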
