Poor people give away a lot of information. If you’ve never lived under the poverty line, you might not be aware of how much of our personal privacy we trade away for basic benefits such as food stamps, health insurance, and utility discounts. It’s not just Social Security numbers and home addresses, which are required as part of these applications; it includes health histories, household incomes, living expenses, and employment histories. Most people shrug off this exchange: What good is personal data when you have no money and terrible credit anyway — especially when you don’t really have a choice?
But after decades of collecting this data, the government is putting it to use. This information is feeding algorithms that decide everything from whether or not you get health insurance to how much time you spend in jail. Increasingly, it is helping determine whether or not parents get to keep their kids.
When someone phones in a report of suspected child abuse — usually to a state or county child abuse hotline — a call screener has to determine whether the accusation merits an actual investigation. Sometimes they have background information, such as prior child welfare reports, to assist in their decision-making process, but often they have to make snap determinations with very little guidance besides the details of the immediate report. There are more than 7 million maltreatment reports each year, and caseworkers get overwhelmed and burn out quickly — especially when a serious case gets overlooked. New algorithms popping up around the country review the data points available for each case and suggest whether or not an investigation should be opened, in an attempt to offset some of the individual responsibility placed on caseworkers.
The trouble is, algorithms aren’t designed to find new information that humans miss — they’re designed to use the data that humans have previously input as efficiently as possible.
“If you give it biased data, it will be biased,” explained Cathy O’Neil, mathematician and author of the book Weapons of Math Destruction, while speaking with me for a story I wrote for Undark last year. “The very short version is that when you’re using the past as a kind of reference for how it works well, you’re implicitly assuming the past is doing a good job of rewarding good things and punishing bad things. You’re training the system to say if it worked in the past, it should work in the future.”
Historically, low-income families have had their children removed from their homes at higher rates than wealthier families. As a result, these new algorithms work to codify poverty as a criterion for child maltreatment. Some of the variables that these tools consider are public records that only exist for low-income parents, such as parents’ poverty status, whether they receive welfare benefits like SNAP and TANF, employment status, and whether they receive Medicaid. Other factors, like previous criminal justice involvement and whether or not there have been allegations of substance misuse in the past, are also dramatically more likely for families living in poverty.
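To see how this plays out, consider a deliberately tiny, hypothetical sketch in Python. The variable names (SNAP receipt, Medicaid, prior arrest) and the data are invented stand-ins for the kinds of public-record factors described above, and this is not the code of any actual screening tool — it is only meant to show that a model trained on past removal decisions that skewed toward poor families will flag a new poor family as high-risk even when nothing else is wrong.

```python
# Illustrative sketch only: a toy risk model trained on past removal decisions.
# Feature names and data are hypothetical; no real screening tool is reproduced here.
from sklearn.linear_model import LogisticRegression

# Each row is a fictional past case: [receives_SNAP, on_Medicaid, prior_arrest]
X_history = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
    [0, 0, 0],
]
# Labels are past caseworker decisions (1 = child removed), not ground truth
# about maltreatment -- so any historical tendency to remove children from
# poor families gets learned as if it were a fact about risk.
y_history = [1, 1, 0, 1, 0, 0]

model = LogisticRegression().fit(X_history, y_history)

# A new family that receives SNAP and Medicaid but has no other red flags
# still scores as high-risk, because poverty markers drove past removals.
new_family = [[1, 1, 0]]
print(model.predict_proba(new_family)[0][1])
```

The point of the toy example is that nothing in the model “knows” about poverty bias; it simply reproduces whatever pattern the historical decisions contain.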
This bias exists even in systems that have been highly praised, like the Allegheny Family Screening Tool currently being implemented in Pittsburgh, where prior arrests and parents’ mental health histories are considered factors in whether a child should be removed. It’s similar in other, less-transparent systems, like one in Florida where tech giant SAS contracted with the Florida Department of Children and Families to research which factors were most likely to contribute to the death of a child by maltreatment. According to press releases by SAS (some of which have been taken down since they began garnering media attention), the company used public records such as Medicaid status, criminal justice history, and substance-use treatment history.
The results led jurisdictions in Florida to zoom in on factors that apply to huge swaths of families, including mine. In April of last year, an allegation of drug use and child abandonment prompted Broward County, Florida, child welfare investigators to open an investigation into my family. When my drug tests were negative, the investigation pivoted to my recent financial setbacks, which had been caused by my husband’s acute health crisis. My children were ultimately removed from my care, and we have been separated for nine months for reasons that are primarily financial. My case is far from unique. Three-quarters of child protective cases in the United States are related to neglect, not abuse, and that neglect usually means lack of food, clothing, shelter, heating, or supervision: problems that are almost always the result of poverty.
Ira Schwartz, a private analytics consultant, thinks he may have found a way to help re-balance this system. He conducted a research study in Broward County — the same county where my case is based — and found that the current approach to child welfare substantiation is deeply flawed. According to his research, 90 percent of system referrals were essentially useless, and 40 percent of court-involved cases (which typically involve child removal) were overzealous and harmful, rather than beneficial, to the families. He created his own system that, like the Allegheny tool, predicted the likelihood that a family would become re-involved with the system. But he admits quite openly that predictive algorithms like his target the poor.
“We found in our study that lower socioeconomic status was one of the significant variables that was a predictor [for reinvolvement with the system],” said Schwartz. “The issue with higher-income families is … they just don’t really come into the system because they have other options. With higher-income families, when there’s child abuse or neglect or even spouse abuse and it’s reported, they can afford to go to private agencies, get private mental health services; they can see a psychiatrist or social worker or psychologist … it’s a discrimination factor.”
Schwartz believes that these types of admittedly discriminatory computer programs can still be put to good use when combined with prescriptive analytics, which would determine the services that high-risk families need in order to remain out of the system in the future. Schwartz says this would include services like rental assistance, food assistance, day care funding, and housekeeping services. This would help welfare agencies understand which families need what services, and streamline the process of providing them. (All jurisdictions are legally required to make “reasonable efforts” to help families resolve the issues that brought them under investigation, but how agencies go about meeting that standard varies widely by location.)
The issue with these algorithms is certainly not malice on the part of their creators. Even the companies building the more secretive, proprietary algorithms, like SAS, say they want to create a safer system that results in less child maltreatment. But it’s unclear whether that is possible with the data that’s available. Without comparable data from wealthier populations, which are better protected by privacy laws, the new systems cannot produce accurate results — and even if more data were added, it would mean more families are being separated and surveilled.