Context matters

We have just examined issues of algorithmic bias in one setting within the criminal justice system, pretrial detention, where decisions influenced by predictive analytics carry consequences that are both punitive and significant. Detaining defendants before trial can separate individuals from their families and communities, stigmatize them, and cost them their jobs, among other negative impacts.

However, predictive analytics also plays a pivotal role in service decisions where the consequences are largely beneficial. For instance, students identified as having a high risk of not completing high school on time may receive additional tutoring, counseling, or parental outreach. Families estimated to be at high risk of homelessness may be granted housing assistance, job training, and social supports. In the financial sector, clients at high risk of credit default could be provided with financial counseling to enhance their creditworthiness.

When predictive analytics identifies individuals or groups to receive extra help, bias may be measured and considered differently compared to punitive contexts. In scenarios where analytics are used for assistance, a bias that leads to over-identification can result in more people receiving help. While this pattern may lead to resource strain, the immediate ethical implications aren’t as severe as those in punitive contexts. Here, the main concern is under-identification bias, where some individuals or groups who need help are overlooked.

To measure whether over- or under-identification is occurring, statistical tests that focus on the rates of false negatives among different subpopulations can be useful. We would look for disparities across racial, gender, socioeconomic, or geographic groups in the likelihood of being identified as ‘at risk’ and thereby eligible for additional support. Special attention should be given to marginalized or vulnerable populations to ensure they are not systematically deprived of necessary assistance due to biases in predictive models.
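To make the idea concrete, here is a minimal sketch in Python (pandas) of how one might compare false negative rates across groups. The data, column names (`group`, `flagged`, `needed_help`), and choice of reference group are all hypothetical; the point is simply to illustrate comparing missed-identification rates across subpopulations.

```python
import pandas as pd

# Hypothetical audit data: one row per person, with the model's
# 'at risk' flag, the observed need, and a group attribute.
df = pd.DataFrame({
    "group":       ["A", "A", "A", "B", "B", "B", "B", "A"],
    "flagged":     [1, 0, 1, 0, 0, 1, 0, 1],   # model: identified as 'at risk'
    "needed_help": [1, 1, 0, 1, 1, 1, 0, 1],   # ground truth: actually needed support
})

# False negative rate per group: among people who truly needed help,
# the share the model failed to flag for assistance.
needed = df[df["needed_help"] == 1]
fnr = needed.groupby("group")["flagged"].apply(lambda s: (s == 0).mean())
print(fnr)

# Disparity ratio relative to a reference group (hypothetical choice):
# values well above 1 suggest that group is disproportionately overlooked.
print(fnr / fnr["A"])
```

In this toy data, group B’s false negative rate (about 0.67) is twice group A’s (about 0.33), which is exactly the kind of disparity that would warrant a closer look for under-identification bias.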

Bias might be weighed by considering both the immediate and long-term impacts on individuals and communities. In the assistive context, the ethical stakes, while still high, are somewhat alleviated by the benevolent intention behind the intervention. The focus then shifts to ensuring the equitable allocation of resources and opportunities to mitigate the potential harm from under-identification. At the same time, even interventions designed to be beneficial can have negative consequences, such as the burden that extra “services” place on individuals and the stigma that may attach to those designated to receive them.

Broader ethical considerations must always remain centered as well. As a review, see the earlier section on “What are the ethical considerations of using PA to improve services?”.

Note that Aequitas, an open-source bias audit toolkit, is a helpful resource for auditing predictive models. It takes context into account, including whether the decisions based on a model’s results are “punitive” or “assistive.” Optionally, you can read the accompanying paper, linked on the Aequitas website.
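As a rough illustration, the sketch below shows what an audit with the Aequitas Python package might look like, using its Group and Bias classes. This assumes the classic Aequitas API and its expected input format (a data frame with `score`, `label_value`, and attribute columns); exact names and defaults may differ across versions, so treat it as a starting point and consult the Aequitas documentation.

```python
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias

# Hypothetical audit frame in the format Aequitas expects:
# 'score' is the model's binary decision, 'label_value' the observed
# outcome, and the remaining columns are protected attributes.
df = pd.DataFrame({
    "score":       [1, 0, 1, 0, 0, 1, 0, 1],
    "label_value": [1, 1, 0, 1, 1, 1, 0, 1],
    "race":        ["A", "A", "A", "B", "B", "B", "B", "A"],
})

# Group-level confusion-matrix metrics (FNR, FPR, etc.) per attribute value.
g = Group()
xtab, _ = g.get_crosstabs(df)

# Disparities relative to a chosen (hypothetical) reference group.
b = Bias()
bdf = b.get_disparity_predefined_groups(
    xtab, original_df=df, ref_groups_dict={"race": "A"}
)

# In an assistive context, the false-negative-rate disparity is the
# column to scrutinize for systematic under-identification.
print(bdf[["attribute_name", "attribute_value", "fnr", "fnr_disparity"]])
```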
