19 November 2018

Is the Government Really in Control of Its Algorithms?

By : Benjamin Powers

Clumsy decisions made by software are leading to calls for automation to be audited

Elizabeth Brico’s daughters were removed from her custody in April 2018, in part, she believes, because of an algorithm.

Brico was living with her in-laws in Florida while her husband grappled with mental health issues. While they didn’t always get along, a tense peace had held. But when arguments threatened to boil over, Brico took a short trip to Miami to let things cool down.

“I had my phone on me and remained in text/phone contact with my in-laws, but shortly before returning they called the child abuse hotline and reported that I had disappeared without contact to use drugs,” says Brico, who has been in pharmacotherapy and counseling for five years for previous substance abuse. “My mother-in-law told them I was a heroin addict. I’d given birth to Anabelle while on prescribed methadone in Florida, so there was a record of my OUD treatment. The investigator made no attempt to contact me. She filed for a shelter petition and I learned about it the evening before the court hearing.”

Brico believes that an algorithm unfairly reinforced historical factors in her case, and that those factors weighed heavily with the people who decided to remove her children. Florida, she writes, is one of the first states in the country to implement predictive analytics as part of its child welfare system.

The Florida Department of Children and Families (DCF) used a system built by a private analytics company called SAS, which relies on publicly available data, including criminal records, drug and alcohol treatment data, and behavioral health data from 2007 to mid-2013, the company’s website states. The system profiled families and tried to identify factors that might predict and prevent the abuse or death of a child. The findings matched earlier, less scientific qualitative research the DCF had conducted over the previous two years, Interim Secretary Mike Carroll said in a 2014 interview.

The system repeats a prejudice common to algorithmic systems rolled out by the government: It negatively impacts the poor and vulnerable communities who rely most heavily on public programs. Privately insured patients were excluded from the system because privacy laws protected their personal data.

Across the U.S., similar systems are being tested with varying degrees of success. In Hillsborough County, Fla., a real-time algorithm designed by a non-profit called Eckerd Connects uses data to identify children at risk of serious injury or death — yet a similar algorithm from Eckerd Connects was shut down in Illinois because of its unreliability. Another algorithmic system that SAS devised for child protection was shuttered in Los Angeles after concerns that it generated an extremely high rate of false positives.

Technology companies like to describe algorithms as neutral and autonomous, but there is growing concern over the bias of these kinds of systems. Algorithms regularly misfire: Amazon had to shelve its own recruitment algorithm because it was biased against women; an algorithm used to predict recidivism was shown to be biased against people of color; and small businesses were kicked off a USDA food stamps program because of questionable fraud charges.

Just like the humans who designed them, algorithms and machine-learning systems carry biases wherever they go. A system designed today to find a company’s next CEO, for example, might be trained on recent data about top performers. But that data would point overwhelmingly to older white men, perpetuating the structural sexism and racism that have held back better candidates.
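
As a hedged illustration of that point (the profiles, numbers, and scoring function below are hypothetical and not drawn from any real hiring system), a model scored purely on resemblance to historical “top performers” simply reproduces whatever demographic skew that history contains:

```python
# Hypothetical illustration: a naive scorer built only on who succeeded
# in the past reproduces the demographic skew of that past.
from collections import Counter

# Toy history: 18 of 20 past "top performers" share one demographic profile.
past_top_performers = (
    [{"gender": "male", "age_band": "55+"}] * 18
    + [{"gender": "female", "age_band": "35-44"}] * 2
)

def naive_score(candidate, history):
    """Score a candidate by how often their profile appears among past successes."""
    counts = Counter((p["gender"], p["age_band"]) for p in history)
    return counts[(candidate["gender"], candidate["age_band"])] / len(history)

print(naive_score({"gender": "male", "age_band": "55+"}, past_top_performers))      # 0.9
print(naive_score({"gender": "female", "age_band": "35-44"}, past_top_performers))  # 0.1
```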

One solution to implicit bias is to have algorithms audited independently, a growing trend in the private sector. Entrepreneur Yale Fox took this approach when setting up his business Rentlogic, which uses public inspection data to inform an algorithm that grades landlords and their buildings in New York City. Fox worked with a specialist consultancy run by Cathy O’Neil, whose 2016 book Weapons of Math Destruction helped bring the issue of algorithmic bias to public awareness. Much like a financial audit, O’Neil Risk Consulting & Algorithmic Auditing (ORCAA) reviews algorithmic systems for impact, effectiveness, and accuracy.

“Algorithms are eating the world,” says Fox. “When you have a machine determining things that impact humans lives, people want to know about it. All of these algorithms are going to have to be audited in the future, or there is going to be some sort of regulation.”

Fox says the independent audit was a simple process of giving the auditors methodical access to Rentlogic’s code. He feels it helped build trust among his company’s stakeholders and gave his brand some transparency (without opening the system up so completely that landlords could game it). The audit will be repeated next year, after incremental changes and updates to the system.

Another company, a recruitment firm called Pymetrics, created its own auditing tool, AuditAI, and shared it on GitHub for others to download for free. As a recruitment firm, Pymetrics audits its models by checking the data of tens of thousands of candidates against a series of tests for bias.

“We might have a version of the algorithm where 80 percent of Indian women are passing, but only 40 percent of African-American women are passing,” says Priyanka Jain, head of product for Pymetrics. “We can see there is a discrepancy in the different pathways relating to ethnicity. So what we would do is say, ‘Okay, that algorithm isn’t fair,’ and go through all the different versions to find one that meets the laws laid out by the federal government’s Equal Employment Opportunity Commission.”
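
Here is a minimal sketch of the kind of pass-rate comparison Jain describes, following the spirit of the EEOC’s “four-fifths” guideline for adverse impact; the function and figures are illustrative only and are not Pymetrics’ actual AuditAI code:

```python
# Illustrative pass-rate disparity check in the spirit of the EEOC
# "four-fifths" guideline; not Pymetrics' actual AuditAI implementation.

def adverse_impact(pass_rates, threshold=0.8):
    """Return groups whose pass rate is below `threshold` times the highest group's rate."""
    highest = max(pass_rates.values())
    return {
        group: round(rate / highest, 2)
        for group, rate in pass_rates.items()
        if rate / highest < threshold
    }

# Pass rates like those in Jain's example: 80 percent vs. 40 percent.
rates = {"indian_women": 0.80, "african_american_women": 0.40}
print(adverse_impact(rates))  # {'african_american_women': 0.5} -> this version fails the check
```

A version of the algorithm that triggers this flag would be set aside, and other versions checked until one passes for every group.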

Experts say it is difficult to estimate how many algorithm-based decision-making tools have been rolled out across the sprawling state and federal divisions of the U.S. government, yet there is growing evidence that poorly designed algorithms are a problem.

Anyone developing algorithms, as well as those auditing them, should examine the raw input data used to train the machine-learning process, and analyze each part of the algorithm, the design process, and the source code for bias.
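
As one concrete, hypothetical example of auditing the raw input data, an auditor might compare each group’s share of the training records with its share of the population the system will actually be applied to; a skew like the exclusion of privately insured patients from the Florida data would surface immediately. The groups and figures below are invented for illustration:

```python
# Hypothetical input-data audit: compare each group's share of the training
# records with its share of the population the system will be applied to.
from collections import Counter

def representation_gap(records, population_shares, key="group"):
    """Return each group's share of the training data minus its population share."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {
        group: round(counts.get(group, 0) / total - share, 2)
        for group, share in population_shares.items()
    }

# Toy figures only: training data drawn almost entirely from public-program records.
training_rows = [{"group": "publicly_insured"}] * 90 + [{"group": "privately_insured"}] * 10
population = {"publicly_insured": 0.35, "privately_insured": 0.65}
print(representation_gap(training_rows, population))
# {'publicly_insured': 0.55, 'privately_insured': -0.55} -> heavily skewed inputs
```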

“In many real cases of bias and injustice, the decisions are made in this way,” says Jeanna Matthews, associate professor of computer science at Clarkson University and co-chair of the ACM U.S. Public Policy Committee’s working group on algorithmic accountability and transparency. “And I think there is a growing awareness of that, but there are a lot of forces that don’t want to open those boxes.”

A September 2018 report from AI Now, an NYU research institute examining the social implications of artificial intelligence, found that many health care agencies were failing to adequately assess the true costs of these systems and their potential impact on citizens.

“Many states simply pick an assessment tool used by another state, trained on that other state’s historical data, and then apply it to the new population, thus perpetuating historical patterns of inadequate funding and support,” the report states.

Continue Reading : https://medium.com/s/story/is-the-government-really-in-control-of-its-algorithms-6d5e1781bed2

Source : MEDIUM – Technology

Credit: simpson33/iStock/Getty Images Plus