The first attempts to regulate artificial intelligence programs that play a hidden role in employment, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses across the country.
Only one of seven bills aimed at preventing AI’s tendency to discriminate when making consequential decisions — including who gets hired, approved for a home loan or receives medical care — has passed. Colorado Gov. Jared Polis reluctantly signed the bill into law on Friday.
Colorado’s bill and those that faltered in Washington, Connecticut and elsewhere faced battles on many fronts, including between civil rights groups and the tech industry, with lawmakers wary of wading into a technology few yet understand and governors worried about being the odd state out and spooking AI startups.
Polis signed the Colorado bill “with reservations,” saying in a statement that he was wary of regulations that could stifle AI innovation. The bill has a two-year runway and can be amended before it takes effect.
“I encourage (lawmakers) to significantly improve this before it goes into effect,” Polis wrote.
Colorado’s proposal, along with six sister bills, is complex, but would broadly require companies to assess the risk of discrimination from their AI and inform customers when AI was used to help make a consequential decision for them.
The bills are separate from the more than 400 AI-related bills that have been debated this year. Most target narrower slices of the technology, such as the use of deepfakes in elections or to make pornography.
All seven bills are more ambitious, applying to major industries and targeting discrimination, one of the technology’s most pervasive and complex problems.
“We don’t really have visibility into the algorithms that are being used, whether they’re working or not, or whether we’re being discriminated against,” said Rumman Chowdhury, the U.S. State Department’s AI envoy, who previously led Twitter’s AI ethics team.
Different beast
While anti-discrimination laws are already on the books, those who study AI discrimination say it’s a different beast, one the U.S. is already behind in regulating.
“Computers are making biased decisions at scale,” said Christine Webber, a civil rights attorney who has worked on class-action discrimination lawsuits, including against Boeing and Tyson Foods. Now, Webber is nearing final approval of one of the nation’s first settlements in an AI discrimination class action.
“Not to say that the old systems were entirely free of bias, either,” Webber said. But “any one person could only look at so many resumes in a day. So you could only make so many biased decisions in a day, and the computer can do it rapidly, across large numbers of people.”
When you apply for a job, an apartment or a home loan, there’s a good chance AI is assessing your application: sorting it, assigning it a score or filtering it. An estimated 83% of employers use algorithms to help in hiring, according to the Equal Employment Opportunity Commission.
AI itself doesn’t know what to look for in a job application, so it learns from past resumes. And historical data used to train algorithms can smuggle in bias.
Amazon, for example, worked on a hiring algorithm that was trained on old resumes, mostly from male applicants. When assessing new applicants, it downgraded resumes that included the word “women’s” or that listed women’s colleges, because those were not represented in the historical resumes it had learned from. The project was scrapped.
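As a toy illustration of how skewed training data alone can penalize a word, consider the sketch below. The resumes, labels and scoring rule are all invented for demonstration and bear no relation to Amazon’s actual system; real hiring models are far more complex.

```python
from collections import Counter

# Invented historical data echoing the Amazon example: mostly male
# applicants. Label 1 = hired, 0 = rejected. The only resumes containing
# "women's" happen to be rejections, purely because the history is skewed.
history = [
    ("captain of chess club", 1),
    ("led engineering team", 1),
    ("built trading systems", 1),
    ("captain of women's chess club", 0),
    ("women's college graduate", 0),
]

# Count how often each word appears among hires vs. rejections.
hired_words, rejected_words = Counter(), Counter()
for text, label in history:
    for word in text.split():
        (hired_words if label else rejected_words)[word] += 1

def score(resume: str) -> int:
    """Naive score: +1 per word seen in hires, -1 per word seen in rejections."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# "women's" never appears among hires, so it drags the score down even
# though it says nothing about the applicant's qualifications.
print(score("led chess club"))          # 1
print(score("led women's chess club"))  # -1
```

The point is not the arithmetic but the mechanism: the model never sees gender as an input, yet a proxy word inherits the bias baked into the historical labels.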
Webber’s lawsuit alleges that an AI system that scores rental applications disproportionately assigned lower scores to Black or Hispanic applicants. A study found that an AI system built to assess medical needs passed over Black patients for special care.
Studies and lawsuits have offered a glimpse under the hood of AI systems, but most algorithms remain shrouded from view. Americans are largely unaware that these tools are being used, a Pew Research Center survey shows, and companies are generally not required to explicitly disclose that an AI was involved.
“Just pulling back the curtain so we can see who’s really doing the assessment and what tool is being used is a big, big first step,” Webber said. “Existing laws don’t work unless we can get at least some basic information.”
That’s what the Colorado bill, along with another surviving bill in California, is trying to change. The bills, including a flagship proposal in Connecticut that was killed amid opposition from the governor, are largely similar.
The Colorado bill would require companies that use AI to help make consequential decisions for Americans to annually assess their AI for potential bias; implement an oversight program within the company; tell the state attorney general if discrimination was found; and inform customers when an AI was used to help make a decision for them, including an option to appeal.
Unions and academics fear that relying on companies to police themselves means it will be hard to proactively root out discrimination in an AI system before it has done damage. Companies fear that mandated transparency could expose trade secrets, including in potential litigation, in this hyper-competitive new field.
AI companies also pushed for, and largely received, a provision that allows only the attorney general, not citizens, to file lawsuits under the new law. Enforcement details have been left up to the attorney general.
While larger AI companies have been more or less on board with these proposals, a group of smaller Colorado-based AI companies said the requirements might be manageable for giant AI firms, but not for budding startups.
“We’re in a whole new era of primordial soup,” said Logan Cerkovnik, founder of Thumper.ai, referring to the field of AI. “Having overly restrictive legislation that forces us into definitions and limits our use of technology as it’s being formed is just going to be detrimental to innovation.”
The group, along with many AI companies, agreed that addressing what is officially called “algorithmic discrimination” is critical. But they said the bill as written falls short of that goal. Instead, they proposed beefing up existing anti-discrimination laws.
Chowdhury worries that lawsuits are too costly and time-consuming to be an effective enforcement tool, and that laws should go beyond even what Colorado is proposing. She and other academics have instead proposed an accredited, independent organization that could explicitly test AI algorithms for potential bias.
“You can understand and deal with a single person who is discriminatory or biased,” Chowdhury said. “What do we do when it’s involved in the whole institution?”
___
Bedayn is a corps member for the Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.